Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!harbinger.cc.monash.edu.au!nntp.coast.net!news-res.gsl.net!news.gsl.net!sgigate.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo
From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 1/8 - index
Supersedes:
Followup-To: comp.databases.sybase
Date: 1 Jul 1996 14:29:32 GMT
Organization: Silicon Graphics, Inc. Nederland, CO. USA
Lines: 249
Approved: news-answers-request@MIT.EDU
Message-ID:
Reply-To: pablo@sgi.com
NNTP-Posting-Host: mew.corp.sgi.com
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly
Originator: pablo@mew.corp.sgi.com
Xref: news1.ucsd.edu comp.databases.sybase:29417 comp.answers:15540 news.answers:62008
Archive-name: databases/sybase-faq/part1
URL: http://reality.sgi.com/pablo/Sybase_FAQ

SYBASE FAQ INDEX

_version 2.4_
_________________________________________________________________

Index of Sections

* Section 0: Acknowledgements & What's New in this Release
* Section 1: SQL Server, isql and bcp
* Section 2: SQL Hacks/Tricks
* Section 3: Performance and Tuning
* Section 4: Contributed Software
* Section 5: Sybase Technical News
* Section 6: Web Links
* Section 7: Miscellany
_________________________________________________________________

To get a text version of this FAQ:
    ftp://sgigate.sgi.com/pub/Sybase_FAQ/FAQ_txt.Z [320K]

To get the HTML for this FAQ:
    ftp://sgigate.sgi.com/pub/Sybase_FAQ/FAQ_html.tar.Z [366K]
_________________________________________________________________

SQL Server, isql and bcp

1.1) Are there alternatives to row at a time processing?
1.2) How do I start/stop SQL Server when the CPU reboots?
1.3) What's a natural key?
1.4) What traceflags are available?
1.5) How do I use traceflags 5101 and 5102?
1.6) Is it possible to call a UNIX command from within a stored procedure
     or a trigger?
1.7) How do I move tempdb off of the master device?
1.8) How do I hide my password using isql?
1.9) How do I remove row affected and/or ---- when using isql?
1.10) Why not create all my columns varchar(255)?
1.11) What is a SQL Server anyway?
1.12) How do I bcp null dates?
1.13) What's the best value for cschedspins?
1.14) When should I execute an sp_recompile?
1.15) What are the different types of locks and what do they mean?
1.16) What's the purpose of using holdlock?
1.17) What's the difference between an _update in place_ versus a
      _deferred update_? - see Q3.9
1.18) Can I use a named pipe to bcp/dump data out or in?
1.19) What's a good example of a transaction?
1.20) What do all the parameters of a buildmaster -d -yall mean?
1.21) What are some of the hidden/trick DBCC commands?
1.22) How do I set TS Role in order to run DBCC ...?
1.23) How do I clear a log suspend'ed connection?
1.24) How do I manually drop a table?
1.25) How do I correct timeslice -201?
1.26) How do I pipe the output of one isql to another?
1.27) How do I turn off marked suspect on my database?
1.28) The how's and why's on becoming a Certified Sybase Professional
      (CSPDBA)?
1.29) What is cmaxpktsz good for? see also Q1.20
1.30) Table partitioning FAQ
1.31) Shrinking varchar(m) to varchar(n)
_________________________________________________________________

SQL Hacks/Tricks

2.1) Point Characteristic Functions.
2.2) How to implement if-then-else within a select-clause.
2.3) How to invert/pivot a table (also known as the Oracle decode function).
2.4) How to pad with leading zeros an int or smallint.
2.5) Divide by zero and nulls.
2.6) Convert months to financial months.
2.7) Hierarchy traversal - BOMs.
_________________________________________________________________

Performance and Tuning

3.1) What are the nitty gritty details on Performance and Tuning?
3.2) What is the best way to use temp tables in an OLTP environment?
3.3) What's the difference between clustered and non-clustered indexes?
3.4) Optimistic versus Pessimistic locking?
3.5) How do I force an index to be used?
3.6) Why place tempdb and log on low numbered devices?
3.7) Have I configured enough memory for my SQL Server?
3.8) Why should I use stored procedures?
3.9) I don't understand showplan's output, please explain.
3.10) Poor man's sp_sysmon.
3.11) View MRU-LRU procedure cache chain.
_________________________________________________________________

Contributed Software

4.1) sp_freedevice - lists device, size, used and free.
4.2) sp_whodo - augments sp_who by including additional columns: cpu, I/O...
4.3) SQL and sh(1) to dynamically generate a dump/load database command.
4.4) SybPerl - Perl interface to Sybase.
4.5) dbschema.pl - SybPerl script to take a logical snap of a database.
4.6) Sybtcl - TCL interface to Sybase.
4.7) Augmented system stored procedures.
4.8) Examples of Open Client and Open Server programs -- see Q6.11.
4.9) SQL to determine the space used for an index.
4.10) dsql - an isql alternative with command history - see Q4.13.
4.11) xsybmon - an X interface to sp_monitor
4.12) sp_dos - This procedure graphically displays the scope of an object
4.13) sqsh - a superset of dsql with local variables, redirection, pipes
      and all sorts of goodies.
4.14) sp_getdays - returns days in current month.
4.15) ddl_insert.pl - creates insert DDL for a table.
_________________________________________________________________

Sybase Technical News

5.1) Volume 3, Number 2
5.2) Volume 3, Number 3
5.3) Volume 4, Number 1
5.4) Volume 4, Number 2
5.5) Volume 4, Number 3
5.6) Volume 4, Number 4
5.7) Volume 5, Number 1
5.8) Special Supplement -- Migration to System 11
_________________________________________________________________

Web Links

6.1) http://sybase.pnl.gov:2080/Sybase/.Sybase.html
6.2) http://paella.med.yale.edu/topics/database.html
6.3) http://www.acs.ncsu.edu:80/Sybase
6.4) http://www.alaska.net/~pacrim/sybase.html
6.5) Sybase's Replication Server link.
6.6) Sybase's third-party Applications
6.7) Sybase's Online Books - sybooks
6.8) Sybase's third-party bibliography
6.9) Sybase's White Papers
6.10) Sybase's Performance Tips
6.11) Sybase's Open Client, Open Server and other examples
6.12) Sybase Freeware and Shareware http://www.tiac.net/users/sqltech
6.13) ftp://sgigate.sgi.com/pub/Sybase_FAQ/Thawleyhndout.ppt.ps.Z [3670K]

The mother ship may be reached at http://www.sybase.com
_________________________________________________________________

Miscellany

7.1) What can Sybase IQ do for me?
7.2) Net-review of Sybase books
_________________________________________________________________

Acknowledgements

...

Thanks to all the folks in comp.databases.sybase who have made this FAQ
possible. Searching through the archives I found a lot of stuff that folks
posted to the net, things that were sent to me and things that I've come up
with. I decided not to give any one particular person credit. For instance,
the _Net Book Review_ was originally written by a kind netter and given to me.
Someone else added to it and I'm sure that it'll continue to evolve. Rather
than having the maintenance headache of attributing who did what, let's just
say that the FAQ is _ours_. Not mine but yours. Deal?

Corny as it may sound, I really enjoy _our_ news group because of its low
noise-to-signal ratio and how polite we tend to be with one another. Please
mail pablo@sgi.com any changes, comments, complaints and/or disagreements
and I will respond in due time. Heck, I may even fix them.

Please abide by the following and include it if redistributed: _Do not use
this information for profit but do_ share it with anyone.

So who is this guy?
_________________________________________________________________

...

What's New in this Release?

* Andy Sparrow (andy@ide.com) - index - typo comp.database_s_.sybase
* Andrew Mackay (slacour@nfld.com) - Q1.1 - improved on set based
  processing algorithm
* John Bishop (0002030548@mcimail.com) - Q1.30 - partitioning FAQ culled
  from somewhere...
* Andrew Sigmund (asigmund@redshift.com) - tireless editing of the Sybase FAQ
* Q2.5 - stolen from the net
* Paul Horn (phorn@stbbs.com) - Q1.4 - added trace flag 299
* Phil Allen (phila@ozemail.com.au) - Q2.6 - convert months to financial
  months
* Elton (elton@sybase.com) - Q4.14 - sp_getdays
* Q1.31 - stolen from the net
* Scott Gray (gray@voicenet.com) - Q2.7 - hierarchy traversal
* Q Vincent Yin (umyin@mctrf.mb.ca) - Q4.15 - ddl_insert.pl and suggested
  this section
* Q3.10 - stolen from the net
* Q1.4 - cross-referenced Q2.5
* Richard Cramotte, Jr. (RGCramotte@aol.com) forwarding Bob Munson's stored
  proc - Q4.16 - sp_exam_space
* David Whitmarsh (djw@accessio.demon.co.uk) - Q3.11 - dbcc procbuf
* Larry Kagan (kagan_larry@jpmorgan.com) - Q1.20 - corrected typos
* Q6.13 - Peter Thawley's '96 talk in PostScript rather than PowerPoint
* Edward Barlow (ebarlow@nyc.pipeline.com) - spiffy up Q6.12's banner on
  the index

-- 
Pablo Sanchez | Ph # (415) 933.3812  Fax # (415) 933.2821
pablo@sgi.com | Pg # (800) 930.5635  -or-  pablo_p@corp.sgi.com
===============================================================================
I am accountable for my actions.  http://reality.sgi.com/pablo [ /Sybase_FAQ ]

----------------------------------------------------------------------

Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!news.mel.connect.com.au!news.mira.net.au!Germany.EU.net!howland.reston.ans.net!gatech!news.mathworks.com!enews.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo
From: pablo@sgi.com (Pablo Sanchez)
Newsgroups: comp.databases.sybase,comp.answers,news.answers
Subject: Sybase FAQ: 2/8 - section 1
Supersedes:
Followup-To: comp.databases.sybase
Date: 1 Jul 1996 14:29:52 GMT
Organization: Silicon Graphics, Inc. Nederland, CO. USA
Lines: 2769
Approved: news-answers-request@MIT.EDU
Message-ID:
References:
Reply-To: pablo@sgi.com
NNTP-Posting-Host: mew.corp.sgi.com
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly
Originator: pablo@mew.corp.sgi.com
Xref: news1.ucsd.edu comp.databases.sybase:29418 comp.answers:15541 news.answers:62009
Archive-name: databases/sybase-faq/part2
URL: http://reality.sgi.com/pablo/Sybase_FAQ

Q1.1: ALTERNATIVE TO ROW AT A TIME PROCESSING
_________________________________________________________________

Someone asked how they could speed up their processing. They were batch
updating/inserting gobs of information. Their algorithm was something as
follows:

...
In another case I do:

    If exists (select record)
    then update record
    else insert record

I'm not sure which wa[y] is faster or if it makes a difference. I am doing
this for as many as 4000 records at a time (calling a stored procedure 4000
times!). I am interested in knowing any way to improve this. The parameter
translation alone on the procedure calls takes 40 seconds for 4000 records.
I am using _exec_ in DB-Lib. Would RPC or CT-Lib be better/faster?

A netter responded stating that it was faster to ditch their algorithm and
to apply a set based strategy:

The way to take your approach is to convert the row at a time processing
(which is the more traditional type of thinking) into a batch at a time
(which is the more relational type of thinking). Now I'm not trying to
insult you or to say that you suck or anything like that; we just need to
dial you in to think in relational terms.

The idea is to do batches (or bundles) of rows rather than processing a
single one at a time. So let's take your example (since you didn't give
exact values [probably out of kindness to save my eyeballs] I'll use your
generic example to extend what I'm talking about):

Before:

    if exists (select record)
    then update record
    else insert record

New way:

1. Load _all_ your rows into a table named _new_stuff_ in a separate work
   database (call it _work_db_) and load it using _bcp_ -- no third GL
   needed:
      1. truncate _new_stuff_ and drop all indexes
      2. sort your data using UNIX sort(1) by the clustered columns
      3. load it using _bcp_
      4. create the clustered index using _with sorted_data_ and any
         ancillary non-clustered indexes
2. Assume that your target table is called _old_stuff_.
3. Do the _update_ in a single batch:

    begin tran

    /* delete any rows in old_stuff which would normally
    ** have been updated... we'll insert 'em instead!
    ** Essentially, treat the update as a delete/insert.
    */
    delete old_stuff
    from old_stuff, new_stuff
    where old_stuff.key = new_stuff.key

    /* insert entire new table: this adds any rows
    ** that would have been updated before and
    ** inserts the new rows
    */
    insert old_stuff
    select * from new_stuff

    commit tran

You can do all this _without_ writing 3-GL, using _bcp_ and a shell script.

A word of caution: _since these inserts/updates are batch oriented you may
blow your log if you attempt to do too many at a time. In order to avoid
this, use the_ set rowcount _directive to create bite-size chunks._

The original respondent reported a _150%_ improvement in processing time.
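To make the caution concrete, here is a minimal, untested sketch of the _set
rowcount_ chunking, using the hypothetical _old_stuff_/_new_stuff_ names
from above (the 1000-row chunk size is arbitrary):

    set rowcount 1000             /* bound each delete to a bite-size chunk */
    declare @rows int
    select @rows = 1
    while @rows > 0
    begin
        begin tran
        delete old_stuff
        from old_stuff, new_stuff
        where old_stuff.key = new_stuff.key
        select @rows = @@rowcount /* capture before the next statement resets it */
        commit tran
    end
    set rowcount 0                /* back to unlimited */
    go

Each trip through the loop commits, so the log only ever has to hold one
chunk's worth of changes; dump the transaction log between runs if it still
fills.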
_________________________________________________________________

Q1.2: HOW TO START/STOP SQL SERVER WHEN CPU REBOOTS
_________________________________________________________________

Below is an example of the various files (on _Irix_) that are needed to
start/stop a SQL Server. The information can easily be extended to any UNIX
platform.

The idea is to allow as much flexibility as possible to the two classes of
administrators that admin the machine:

* The System Administrator
* The Database Administrator

Any errors introduced by the DBA will not interfere with the System
Administrator's job. With that in mind we have the system startup/shutdown
file _/etc/init.d/sybase_ invoking a script defined by the DBA:
_/usr/sybase/sys.config/{start,stop}.sybase_

_/etc/init.d/sybase_

On some operating systems this file must be linked to a corresponding entry
in _/etc/rc.0_ and _/etc/rc.2_ -- see _rc0(1M)_ and _rc2(1M)_

    #!/bin/sh
    # last modified: 10/17/95, sr.
    #
    # Make symbolic links so this file will be called during system
    # stop/start.
    # ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
    # ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
    # chkconfig -f sybase on

    # IRIX convention: IS_ON tests whether the named chkconfig flag is on
    IS_ON=/etc/chkconfig

    # Sybase System-wide configuration files
    CONFIG=/usr/sybase/sys.config

    if $IS_ON verbose ; then    # For a verbose startup and shutdown
        ECHO=echo
        VERBOSE=-v
    else                        # For a quiet startup and shutdown
        ECHO=:
        VERBOSE=
    fi

    case "$1" in
    'start')
        if $IS_ON sybase; then
            if [ -x $CONFIG/start.sybase ]; then
                $ECHO "starting Sybase servers"
                /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
            fi
        fi
        ;;

    'stop')
        if $IS_ON sybase; then
            if [ -x $CONFIG/stop.sybase ]; then
                $ECHO "stopping Sybase servers"
                /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
            fi
        fi
        ;;

    *)
        echo "usage: $0 {start|stop}"
        ;;
    esac

_/usr/sybase/sys.config/{start,stop}.sybase_

start.sybase

    #!/bin/sh -a
    #
    # Script to start sybase
    #
    # NOTE: different versions of sybase exist under /usr/sybase/{version}
    #
    # Determine if we need to spew our output
    if [ "$1" != "spew" ] ; then
        OUTPUT=">/dev/null 2>&1"
    else
        OUTPUT=""
    fi

    # 10.0.2 servers
    HOME=/usr/sybase/10.0.2
    cd $HOME

    # Start the backup server
    eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT

    # Start the dataservers
    # Wait two seconds between starts to minimize trauma to CPU server
    eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
    sleep 2
    eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT

    exit 0

stop.sybase

    #!/bin/sh
    #
    # Script to stop sybase
    #
    # Determine if we need to spew our output
    if [ -z "$1" ] ; then
        OUTPUT=">/dev/null 2>&1"
    else
        OUTPUT="-v"
    fi

    eval killall -9 $OUTPUT dataserver backupserver server_check sleep \
        sybmultbuf

    exit 0
_________________________________________________________________

Q1.3: WHAT'S A NATURAL KEY?
_________________________________________________________________

Let me think back to my database class... okay, I can't think that far so
I'll paraphrase... essentially, a _natural key_ is a key for a given table
that uniquely identifies the row. It's natural in the sense that it follows
the business or real world need.

For example, assume that social security numbers are unique (uniqueness is
striven for but it's not always the case). If you had the following employee
table:

    employee:
        ssn     char(09)
        f_name  char(20)
        l_name  char(20)
        title   char(03)

then a natural key would be _ssn_. If the combination of _f_name_ and
_l_name_ were unique at this company, then another _natural key_ would be
_f_name, l_name_. As a matter of fact, you can have many _natural keys_ in
a given table, but in practice what one does is build a surrogate (or
artificial) key.

The surrogate key is guaranteed to be unique because (wait, get back, here
it goes again) it's typically a monotonically increasing value. Okay, my
mathematician wife would be proud of me... really all it means is that the
key is always increasing: i+1.

The reason one uses a surrogate key is that your joins will be faster. If
we extended our employee table to have a surrogate key:

    employee:
        id      identity
        ssn     char(09)
        f_name  char(20)
        l_name  char(20)
        title   char(03)

then instead of doing the following:

    where a.f_name = b.f_name
    and   a.l_name = b.l_name

we'd do this:

    where a.id = b.id

We can build indexes on these keys, and since Sybase's atomic storage unit
is 2K, we can stash more values per 2K page with smaller indexes, thus
giving us better performance (imagine the key being 40 bytes versus being
say 4 bytes... how many 40 byte values can you stash in a 2K page versus a
4 byte value? -- and how much wood could a wood chuck chuck, if a wood chuck
could chuck wood?)

Does it have anything to do with natural joins? Um, not really... from "A
Guide to Sybase..", McGoveran and Date, p. 112:

    The equi-join by definition must produce a result containing two
    identical columns. If one of those two columns is eliminated, what is
    left is called the natural join.
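As an illustrative sketch of the above (the index name is invented; Sybase
identity columns are declared as numeric with scale 0):

    create table employee
    (
        id     numeric(9,0) identity,  /* surrogate key: monotonically
                                       ** increasing, compact to index   */
        ssn    char(9)      not null,  /* natural key                    */
        f_name char(20)     not null,
        l_name char(20)     not null,
        title  char(3)      not null
    )
    go

    /* the compact id packs far more keys per 2K index page than a wide
    ** compound key over (f_name, l_name) would
    */
    create unique index employee_id on employee(id)
    go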
_________________________________________________________________

Q1.4: TRACE FLAG DEFINITIONS
_________________________________________________________________

To activate trace flags, add them to the RUN_* script. The following example
uses the 1611 and 260 trace flags. Use of these traceflags is not
recommended by Sybase. Please use at your own risk.

    % cd ~sybase/install
    % cat RUN_BLAND
    #!/bin/sh
    #
    # SQL Server Information:
    #  name:               BLAND
    #  master device:      /usr/sybase/dbf/BLAND/master.dat
    #  master device size: 25600
    #  errorlog:           /usr/sybase/install/errorlog_BLAND
    #  interfaces:         /usr/sybase
    #
    /usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
    -sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
    -T1611 -T260

Trace Flags

Flag   Description

200    Displays messages about the before image of the query-tree.
201    Displays messages about the after image of the query-tree.
241    Compress all query-trees whenever the SQL dataserver is started.
260    Reduce TDS (Tabular Data Stream) overhead in stored procedures. This
       flag should be on for all SQL dataservers unless you are using
       SQLDebug.
299    Instructs the dataserver not to recompile a child stored procedure
       that inherits a temp table from a parent procedure.
302    Print information about the optimizer's index selection.
310    Print information about the optimizer's join selection.
320    Turn off the join order heuristic.
324    Turn off the like optimization for ad-hoc queries using
       @local_variables.
602    Prints out diagnostic information for deadlock prevention.
603    Prints out diagnostic information when avoiding deadlock.
699    Turn off transaction logging for the entire SQL dataserver.
1204   Send deadlock detection to the errorlog.
1603   Use standard disk I/O (i.e. turn off asynchronous I/O).
1606   Create a debug engine start file. This allows you to start up a
       debug engine which can access the server's shared memory for running
       diagnostics. I'm not sure how useful this is in a production
       environment as the debugger often brings down the server. I'm not
       sure if Sybase have ported the debug stuff to 10/11. Like most of
       their debug tools it started off quite strongly but was never
       developed.
1608   Start up only engine 0; use dbcc engine(online) to incrementally
       bring up additional engines until the maximum number of configured
       engines.
1610   Boot the SQL dataserver with TCP_NODELAY enabled.
1611   If possible, pin shared memory -- check errorlog for
       success/failure.
1613   Set affinity of the SQL dataserver engines onto particular CPUs --
       usually pins engine 0 to processor 0, engine 1 to processor 1...
1615   SGI only: turn on recoverability to filesystem devices.
2512   Prevent dbcc from checking syslogs. Useful when you are constantly
       getting spurious allocation errors.
3300   Display each log record that is being processed during recovery. You
       may wish to redirect stdout because it can be a lot of information.
3604   Send dbcc output to screen.
3605   Send dbcc output to errorlog.
3607   Do not recover any database, clear tempdb, or start up checkpoint
       process.
3608   Recover master only. Do not clear tempdb or start up checkpoint
       process.
3609   Recover all databases. Do not clear tempdb or start up checkpoint
       process.
3610   Pre-System 10 behavior: divide by zero results in NULL instead of an
       error - also see Q2.5.
3620   Do not kill infected processes.
4013   Place a record in the errorlog for each login to the dataserver.
5101   Forces all I/O requests to go through engine 0. This removes the
       contention between processors but could create a bottleneck if
       engine 0 becomes busy with non-I/O tasks. For more information see
       Q1.5.
5102   Prevents engine 0 from running any non-affinitied tasks. For more
       information see Q1.5.
7103   Disable table lock promotion for text columns.
8203   Display statement and transaction locks on a deadlock error.
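Boot-time flags such as 1611 have to go in the RUN_* script, but the
diagnostic flags can also be toggled for the current session with dbcc
traceon()/traceoff(). A quick sketch (the table and column names are
invented; 3604 routes the diagnostics to your screen):

    1> dbcc traceon(3604, 302)
    2> go
    1> select * from big_table where key_col = 42
    2> go
    1> dbcc traceoff(3604, 302)
    2> go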
Q1.5: TRACE FLAGS -- 5101 AND 5102
_________________________________________________________________

5101

Normally, each engine issues and checks for its own Disk I/O on behalf of
the tasks it runs. In completely symmetric operating systems, this behavior
provides maximum I/O throughput for SQL Server. Some operating systems are
not completely symmetric in their Disk I/O routines. For these environments,
the server can be booted with the 5101 trace flag. While tasks still request
disk I/O from any engine, the actual request to/from the OS is performed by
engine 0. The performance benefit comes from the reduced or eliminated
contention on the locking mechanism inside the OS kernel.

To enable I/O affinity to engine 0, start SQL Server with the 5101 Trace
Flag. Your errorlog will indicate the use of this option with the message:

    Disk I/O affinitied to engine: 0

This trace flag only provides performance gains for servers with 3 or more
dataserver engines configured and being significantly utilized. _Use of
this trace flag with fully symmetric operating systems will degrade
performance!_

5102

The 5102 trace flag prevents engine 0 from running any non-affinitied
tasks. Normally, this forces engine 0 to perform Network I/O only.
Applications with heavy result set requirements (either large results or
many connections issuing short, fast requests) may benefit. This
effectively eliminates the normal latency for engine 0 to complete running
its user thread before it issues the network I/O to the underlying network
transport driver.

If used in conjunction with the 5101 trace flag, engine 0 would perform all
Disk I/O and Network I/O. For environments with heavy disk and network I/O,
engine 0 could easily saturate when only the 5101 flag is in use. This flag
allows engine 0 to concentrate on I/O by not allowing it to run user tasks.

To force task affinity off engine 0, start SQL Server with the 5102 Trace
Flag. Your errorlog will indicate the use of this option with the message:

    I/O only enabled for engine: 0
_________________________________________________________________

_Warning: Not supported by Sybase. Provided here for your enjoyment._

Q1.6: CALLING UNIX FROM A _TRIGGER_ OR A _STORED PROCEDURE_
_________________________________________________________________

Periodically folks ask if it's possible to issue a system command or call a
UNIX process from a Trigger or a Stored Procedure. This is _not_ possible
from within SQL Server--it has no extension capabilities to allow you to go
outside to the operating system or run your own custom C routine as part of
the server. The typical ways people have implemented this capability are:

1. Buy Open Server and bind in your own custom stuff (calls to system() or
   custom C code) and make Sybase RPC calls to it.
2. Have a dedicated client application running on the server box which
   regularly scans a table and executes the commands written into it (and
   tucks the results into another table which can have a trigger on it to
   gather results...). It is somewhat tricky but cheaper than option 1 --
   see the sketch below.

I know I've (gently) hit Sybase over the years to allow me to extend the
SQL Server product by "patching" in my own routines and functions, but they
don't see a market need since these two options exist... Some days you just
have to eat the cake that you've got...
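A hedged sketch of the polled command table from option 2 (all names here
are invented; the dedicated client wakes up periodically, runs any command
with done = 0, and records what happened):

    create table cmd_queue
    (
        id     numeric(9,0) identity,
        cmd    varchar(255) not null,        /* UNIX command to execute  */
        done   tinyint      default 0 not null, /* flipped by the client */
        status int          null             /* exit status, filled in
                                             ** by the client afterward  */
    )
    go

    /* a trigger or stored procedure queues work like this: */
    insert cmd_queue (cmd)
    values ("ls -l /usr/sybase/install")
    go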
_________________________________________________________________

Q1.7: HOW DO I MOVE _TEMPDB_ OFF OF THE MASTER DEVICE?
_________________________________________________________________

Typically, folks place _tempdb_ on a cooked (regular - as opposed to a raw
device) file system to gain any write buffering that the Operating System
may offer. However, when the SQL Server creates the default _tempdb_, the
first 2MB are on the _master device_. The following is a method to move
_tempdb_ completely off of the _master device_:

_It is recommended that you do the following within a transaction._

1. Dump the master database.
2. Reboot the Server in single user mode.
3. Create a dummy database on a file system. The size of the dummy db
   depends on whether or not you have altered _tempdb_. If _tempdb_ has not
   been altered then just create 2MB.
4. Delete row(s) in _sysusages_ for _tempdb_.
5. Delete row(s) in _sysdatabases_ for _tempdb_.
6. Make the dummy database become _tempdb_ as follows:

    1> update sysusages
    2> set dbid = 2 where dbid = db_id(_dummy database_)
    3> go
    1> update sysdatabases set dbid = 2, name = "tempdb"
    2> where dbid = db_id(_dummy database name_)
    3> go

7. Turn on the _select into/bulkcopy_ option on the new _tempdb_.
8. Reboot the Server (not in single user mode).

A quick alternative

Another method is to drop the segments; the only drawback is that the 2MB
in master are wasted (but so what!):

1. Alter tempdb onto another device:

    1> alter database tempdb on ...
    2> go

2. Use the tempdb:

    1> use tempdb
    2> go

3. Drop the segments:

    1> sp_dropsegment "default", tempdb, master
    2> go
    1> sp_dropsegment logsegment, tempdb, master
    2> go
_________________________________________________________________

Q1.8: HIDING YOUR PASSWORD TO _ISQL_
_________________________________________________________________

Here is a menagerie (I've always wanted to use that word) of different
methods to hide your password. Pick and choose whichever fits your
environment best.

Single SQL Server on host

Script #1

Assuming that you are using bourne shell _sh(1)_ as your scripting
language, you can put the password in a file and substitute the file where
the password is needed. The password is simply the first line of isql's
standard input:

    #!/bin/sh
    # invoke say ISQL or something....
    (
    cat $HOME/dba/password_file
    cat << EOF
    sp_who
    go
    EOF
    ) | isql -Umyuserid -Smyserver

Scripts #2 through #4 are variations on the same theme: keep the password
in a file that only you can read (hence the umask 077) and feed that file
to isql.

Script #2

    #!/bin/sh
    umask 077
    cat > /tmp/sql.$$ << EOF
    mypassword
    use mydb
    go
    sp_who
    go
    EOF
    isql -Umyuserid -Smyserver < /tmp/sql.$$
    rm -f /tmp/sql.$$

Script #3

    #!/bin/sh
    umask 077
    isql -Umyuserid -Smyserver < $HOME/dba/password_and_sql_file

Script #4

    #!/bin/sh
    umask 077
    isql -Umyuserid -Smyserver -i $HOME/dba/password_and_sql_file

Script #5

    #!/bin/sh
    echo 'mypassword
    use mydb
    go
    sp_who
    go' | isql -Umyuserid -Smyserver

Script #6

    #!/bin/sh
    echo "`myScriptForGeneratingPasswords myServer`
    use mydb
    go
    sp_who
    go" | isql -Umyuserid -Smyserver

Script #7

Apparently solaris precludes hiding passwords. While _isql_ tries, solaris
is too smart for it and puts them back on the command line. We just came up
with a clever option for this:

    isql -w000000000000000000000000000000000001 -Pmypass ...
Apparently solaris' _ps(1)_ is too brain dead to wrap and will only show
the first 40 characters or so of the command, so you need to guarantee that
the password is after that. I think this is 100%, but we will be fooling
around with it a bit more.

Multiple SQL Servers on host

Again, assuming that you are using bourne shell as your scripting language,
you can do the following:

1. Create a _global file_. This file will contain passwords, generic
   functions and the master device for the respective DSQUERY.
2. In the actual scripts, source in the _global file_.

_Global File_

    SYBASE=/usr/sybase

    my_password() {
        case $1 in
            SERVER_1) PASSWD="this";;
            SERVER_2) PASSWD="is";;
            SERVER_3) PASSWD="bogus";;
            *)        return 1;;
        esac
        return 0
    }

Generic Script

    #!/bin/sh -a
    #
    # Use "-a" for auto-export of variables
    #

    # "dot" the file - equivalent to csh() "source" command
    . $HOME/dba/global_file

    DSQUERY=$1

    # Determine the password: sets PASSWD
    my_password $DSQUERY
    if [ $? -ne 0 ] ; then # error!
        echo "unknown server: $DSQUERY"
        exit 1
    fi

    # invoke say ISQL or something....
    echo "$PASSWD
    dbcc ...
    go" | $SYBASE/bin/isql -U sa -S $DSQUERY -w1000
_________________________________________________________________

Q1.9: HOW TO REMOVE _ROW AFFECTED_ AND _---_
_________________________________________________________________

If you pipe the output of _isql_ then you can use _sed(1)_ to remove this
extraneous output:

    echo "$PASSWD
    sp_who
    go" | isql -U sa -S MY_SERVER | sed -e '/affected/d' -e '/---/d'

If you simply wish to eliminate the _row affected_ line, use the _set
nocount on_ switch.
_________________________________________________________________

Q1.10: WHY NOT MAX OUT ALL MY COLUMNS?
_________________________________________________________________

People occasionally ask the following valid question:

    Suppose I have varying lengths of character strings none of which
    should exceed 50 characters. _Is there any advantage of last_name
    varchar(50) over last_name varchar(255)?_ That is, for simplicity, can
    I just define all my varying strings to be varchar(255) without even
    thinking about how long they may actually be? Is there any storage or
    performance penalty for this?

There is no performance penalty in doing this, but as another netter
pointed out:

    If you want to define indexes on these fields, then you should specify
    the smallest size, because the sum of the maximal lengths of the fields
    in the index can't be greater than 256 bytes.

and someone else wrote in saying:

    Your data structures should match the business requirements. This way
    the data structures themselves become a data dictionary for others to
    model their applications (report generation and the like).
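A quick illustration of that index limit (the table names are invented):
the composite index on the varchar(255) flavor is rejected because its
maximum width is 510 bytes, while the varchar(50) flavor fits easily:

    create table emp_wide (l_name varchar(255), f_name varchar(255))
    go
    create index name_idx on emp_wide (l_name, f_name)  /* fails: 510 > 256 */
    go

    create table emp_sane (l_name varchar(50), f_name varchar(50))
    go
    create index name_idx on emp_sane (l_name, f_name)  /* fine: 100 < 256 */
    go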
_________________________________________________________________

Q1.11: WHAT IS A SQL SERVER?
_________________________________________________________________

Overview

Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System
10 has some significant improvements over the Sybase 4.x product line,
namely:

* the ability to allocate more memory to the dataserver without degrading
  its performance.
* the ability to have more than one database engine to take advantage of
  multi-processor cpu machines.
* a minimally intrusive process to perform database and transaction dumps.

Background and More Terminology

A SQL Server is simply a Unix process. It is also known as the database
engine. It has multiple threads to handle asynchronous I/O and other
tasks. The number of threads spawned is the number of engines (more on this
in a second) times five; a four-engine server, for example, runs about 20
threads. This is the current implementation of Sybase System 10, 10.0.1 and
10.0.2 on IRIX 5.3.

Each SQL dataserver allocates the following resources from a host machine:

* memory and
* raw partition space.

Each SQL dataserver can have up to 255 databases. In most implementations
the number of databases is limited to what seems reasonable based on the
load on the SQL dataserver. That is, it would be impractical to house all
of a large company's databases under one SQL dataserver because the SQL
dataserver (a Unix process) will become overloaded.

That's where the DBA's experience comes in: through interrogation of the
user community we determine how much activity is going to result on a given
database or databases, and from that we determine whether to create a new
SQL Server or to house the new database under an existing SQL Server. We do
make mistakes (and businesses grow) and have to move databases from one SQL
Server to another. And at times SQL Servers need to move from one CPU
server to another.

With Sybase System 10, each SQL Server can be configured to have more than
one engine (each engine is again a Unix process). There's one primary
engine that is the master engine, and the rest of the engines are
subordinates. They are assigned tasks by the master. Interprocess
communication among all these engines is accomplished with shared memory.

Sometimes when a DBA issues a Unix kill command to extinguish a maverick
SQL Server, the subordinate engines are forgotten. This leaves the shared
memory allocated, and eventually we may get into situations where swapping
occurs because this memory is locked. To find engines that belong to no
master SQL Server, simply look for engines owned by /etc/init (process id
1). These engines can be killed -- this is just FYI and is a DBA duty.

Before presenting an example of a SQL Server, some other topics should be
covered.

Connections

A SQL Server has connections to it. A connection can be viewed as a user
login, but it's not necessarily so. That is, a client (a user) can spark up
multiple instances of their application and each client establishes its own
connection to the SQL dataserver. Some clients may require two or more per
invocation. So typically DBA's are only concerned with the number of
connections, because the number of users typically does not provide
sufficient information for us to do our job.

Connections take up SQL Server resources, namely memory, leaving less
memory for the SQL Server's available cache.

SQL Server Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to a SQL Server. It was around 80MB, with 40MB being the typical
max. This was due to internal implementations of Sybase's data structures.
With Sybase System 10 there really is no limit. For instance, we have a SQL
Server cranked up to 300MB.

The memory in a SQL Server is primarily used to cache data pages from
disk. Consider that the SQL Server is a light weight Operating System:
handling users (connections), allocating memory to users, keeping track of
which data pages need to be flushed to disk and the like. Very
sophisticated and complex. Obviously if a data page is found in memory it's
much faster to retrieve than going out to disk.

Each connection takes away a little bit from the available memory that is
used to cache disk pages. Upon startup, the SQL Server pre-allocates the
memory that is needed for each connection, so it's not prudent to configure
500 connections when only 300 are needed: we'd waste 200 connections and
the memory associated with them. On the other hand, it is also imprudent to
under-configure the number of connections; users have a way of soaking up a
resource (like a SQL Server), and if users have all the connections a DBA
cannot get into the server to allocate more connections.
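For reference, the connection count is an ordinary sp_configure parameter.
A hedged sketch (the 350 is arbitrary; the new value only takes effect
after the SQL Server is rebooted):

    1> sp_configure "user connections"
    2> go
    1> sp_configure "user connections", 350
    2> go
    1> reconfigure
    2> go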
One of the neat things about a SQL Server is that it reaches (just like a
Unix process) a working set. That is, upon startup it'll do a lot of
physical I/O's to seed its cache, to get lookup information for typical
transactions and the like. So initially the first users take heavy hits
because their requests have to be performed as physical I/O. Subsequent
transactions have less physical I/O and more logical I/O's. A logical I/O
is an I/O that is satisfied in the SQL Server's buffer cache. Obviously,
this is the preferred condition.

DSS vs OLTP

We throw around terms like everyone is supposed to know this high tech
lingo. The problem is that they are two different animals that require a
SQL Server to be tuned accordingly for each. Well, here's the low down:

    DSS  - Decision Support System
    OLTP - Online Transaction Processing

What do these mean? OLTP applications are those that have very short orders
of work for each connection: fetch this row and with the results of it
update one or two other rows. Basically, a small number of rows is affected
per transaction in rapid succession, with no significant wait times between
operations in a transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of
data to aggregate some values. So the transactions typically involve
thousands of rows. A big difference from OLTP.

We never want to have DSS and OLTP on the same SQL Server, because the
nature of OLTP is to grab things quickly while the nature of DSS is to
stick around for a long time reading tons of information and summarizing
the results. What a DSS application does is flush out the SQL Server's data
page cache because of the tremendous amount of I/O's. This is obviously
very bad for OLTP applications because the small transactions are now hurt
by this trauma. When it was only OLTP a great percentage of I/O was logical
(satisfied in the cache); now transactions must perform physical I/O.
That's why it's important in Sybase not to mix DSS and OLTP, at least until
System 11 arrives.

The Sybase System 11 release will allow for the mixing of OLTP and DSS by
allowing the DBA to partition (and name) the SQL Server's buffer cache and
assign it to different databases and/or objects. The idea is to allow DSS
to only affect its own pool of memory, thus allowing OLTP to maintain its
working set of memory.

Asynchronous I/O

Why async I/O? The idea is that in a typical online transaction processing
(OLTP) application you have many connections (over 200 connections) and
short transactions: get this row, update that row. These transactions are
typically spread across different tables of the databases. The SQL Server
can then perform each one of these asynchronously without having to wait
for others to finish. Hence the importance of having async I/O fixed on our
platform.

Engines

Sybase System 10 can have more than one engine (as stated above). Sybase
has trace flags to pin the engines to given CPU processors but we typically
don't do this. It appears that the master engine goes to processor 0 and
subsequent subordinates to the next processor. Currently, Sybase does not
scale linearly.
That is, five engines don't make Sybase perform five times as fast; in
fact we max out with four engines, and after that performance starts to
degrade. This is supposed to be fixed with Sybase System 11.

Putting Everything Together

As previously mentioned, a SQL Server is a collection of databases with
connections (the users) to apply and retrieve information to and from these
containers of information (databases).

The SQL Server is built, and its master device is typically built over a
medium sized (50MB) raw partition. The tempdb is built over a cooked
(regular - as opposed to a raw device) file system to realize any
performance gains from buffered writes. The databases themselves are built
over raw logical devices to ensure their integrity.

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix (or VMS). However, it needs to have a conduit
to the outside world, and this is accomplished via devices. All physical
devices are mapped to logical devices. That is, given a physical device
(such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it is mapped by the
DBA to a logical device. Depending on the type of the device, it is
allocated, by the DBA, to the appropriate place (vague enough?). Okay,
let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly. The
DBA needs to create a dump device. We'll call it, logically in the
database, datadump_for_my_db, but we'll map it to the physical world as
/lv1/dumps/in_your_eye.dat. So the DBA will write a script that connects to
the SQL Server and issues a command like this:

    dump database my_stinking_db to datadump_for_my_db
    go

and the backupserver (out of this scope) takes the contents of
my_stinking_db and writes it out to the disk file
/lv1/dumps/in_your_eye.dat. That's a dump device. The thing is that it's
not preallocated. This special device is simply a window to the operating
system.

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are
built over raw partitions. The reason for this is that Sybase needs to be
guaranteed that all its writes complete successfully. Otherwise, if it
posted to a file system buffer (as in a cooked file system) and the machine
crashed, as far as Sybase is concerned the write was committed. It was not,
however, and the integrity of the database would be lost. That is why
Sybase needs raw partitions.

But back to the matter at hand... When building a new SQL Server, the DBA
determines how much space they'll need for all the databases that will be
housed in this SQL Server. Each production database is composed of data and
log. The data is where the actual information resides. The log is where the
changes are kept. That is, every row that is updated/deleted/inserted gets
placed into the log portion and then applied to the data portion of the
database. That's why the DBA strives to place the raw devices for logs on
separate disks, because everything has to single thread through the log.

A transaction is a collection of SQL statements (insert/delete/update) that
are grouped together to form a single unit of work. Typically they map very
closely to the business. I'll quote the Sybase SQL Server System
Administration guide on the role of the log:

    The transaction log is a write-ahead log. When a user issues a
    statement that would modify the database, SQL Server automatically
    writes the changes to the log.
    After all changes for a statement have been recorded in the log, they
    are written to an in-cache copy of the data page. The data page remains
    in cache until the memory is needed for another database page. At that
    time, it is written to disk. If any statement in a transaction fails to
    complete, SQL Server reverses all changes made by the transaction. SQL
    Server writes an "end transaction" record to the log at the end of each
    transaction, recording the status (success or failure) of the
    transaction.

As such, the log will grow as user connections effect changes to the
database. The need then arises to clear out of the log all transactions
that have been flushed to disk. This is performed by issuing the following
command:

    dump transaction my_stinking_db to logdump_for_my_db
    go

The SQL Server will write to the dump device all transactions that have
been committed to disk and will delete the entries from its copy, thus
freeing up space in the log. Dumping of the transaction logs is
accomplished via cron. We schedule the heavily hit databases every 20
minutes during peak times.

A single user can fill up the log by issuing a begin transaction with no
corresponding commit/rollback transaction. This is because all their
changes are being applied to the log as an open-ended transaction, which is
never closed. This open-ended transaction cannot be flushed from the log,
and therefore grows until it occupies all of the free space on the log
device. And the way we dump it is with a dump device. :-)

An Example

If the DBA has four databases to plop on this SQL Server and they need a
total of 800MB of data and 100MB of log (because that's what really matters
to us), then they'd probably do something like this:

1. allocate sufficient raw devices to cover the data portion of all the
   databases
2. allocate sufficient raw devices to cover the log portion of all the
   databases
3. start allocating the databases to the devices.

For example, assuming the following database requirements:

    DB   Data   Log
    a     300    30
    b     400    40
    c     100    10

and the following devices:

    Logical         Physical              Size
    dks3d1s2_data   /dev/rdsk/dks3d1s2    500
    dks4d1s2_data   /dev/rdsk/dks4d1s2    500
    dks5d1s0_log    /dev/rdsk/dks5d1s0    200

then the DBA may elect to create the databases as follows:

    create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
    create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
    create database c on dks3d1s2_data = 50, dks4d1s2_data = 50
        log on dks5d1s0_log = 10

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine because it can be used
for future growth. While the Sybase SQL Server is running, no other Sybase
SQL Server can re-allocate these physical devices.

TempDB

TempDB is simply a scratch pad database. It gets recreated when a SQL
Server is rebooted. The information held in this database is temporary
data. A query may build a temporary table to assist it; the Sybase
optimizer may decide to create a temporary table to assist itself. Since
this is an area of constant activity we create this database over a cooked
file system, which has historically proven to have better performance than
raw - due to the buffered writes provided by the Operating System.

Port Numbers

When creating a new SQL Server, we allocate a port to it (currently, DBA
reserves ports 1500 through 1899 for its use). We then map a host name to
the different ports: hera, fddi-hera and so forth. We can actually have
more than one port number for a SQL Server, but we typically don't do this.
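For illustration only, a sketch of what the corresponding entries in
Sybase's interfaces file might look like -- the server names, host names
and port are invented here, and the exact format varies by platform:

    HERA
        query tcp ether hera 1521
        master tcp ether hera 1521

    HERA_FDDI
        query tcp ether fddi-hera 1521
        master tcp ether fddi-hera 1521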
_________________________________________________________________

Q1.12: HOW DO I BCP NULL DATES?
_________________________________________________________________

As long as there is _nothing_ between the field delimiters in your data, a
null will be entered. If there's a space, the value will be Jan 1, 1900.

You can use _sed(1)_ to squeeze blanks out of fields:

    sed -e 's/|[ ]*|/||/g' old_file > new_file
_________________________________________________________________

Q1.13: WHAT'S THE BEST VALUE FOR _CSCHEDSPINS_?
_________________________________________________________________

It is crucial to understand that _cschedspins_ is a tunable parameter
(recommended values being between 1-2000) and that the optimum value is
completely dependent on the customer's environment.

_cschedspins_ is used by the scheduler only when it finds that there are no
runnable tasks. If there are no runnable tasks, the scheduler has two
options:

1. Let the engine go to sleep (which is done by an OS call) for a specified
   interval or until an event happens. This option assumes that tasks won't
   become runnable because of tasks executing on other engines. This would
   happen when the tasks are waiting for I/O more than any other resource
   such as locks, which means that we could free up the CPU resource (by
   going to sleep) and let the system use it to expedite completion of
   system tasks, including I/O.
2. Go and look for a ready task again. This option assumes that a task
   would become runnable in the near term, and so incurring the extra cost
   of an OS context switch through the OS sleep/wakeup mechanism is
   unacceptable. This scenario assumes that tasks are waiting on resources,
   such as locks, which could free up because of tasks executing on other
   engines, more than they wait for I/O.

_cschedspins_ controls how many times we would choose option 2 before
choosing option 1. Setting _cschedspins_ low favors option 1 and setting it
high favors option 2. Since an I/O intensive task mix fits in with option
1, setting _cschedspins_ low may be more beneficial. Similarly, since a CPU
intensive job mix favors option 2, setting _cschedspins_ high may be
beneficial.

The consensus is that a single cpu server should have _cschedspins_ set to
1. However, I strongly recommend that users carefully test values for
_cschedspins_ and monitor the results closely. I have seen more than one
site shoot themselves in the foot, so to speak, by changing this parameter
in production without a good understanding of their environment.
_________________________________________________________________

Q1.14: WHEN SHOULD I EXECUTE AN _SP_RECOMPILE_?
_________________________________________________________________

An _sp_recompile_ should be issued any time a new index is added or update
statistics is run. Dropping an index will cause an automatic recompile of
all objects that are dependent on the table.

The _sp_recompile_ command simply increments the _schemacnt_ counter for
the given table. All dependent objects' counters are checked against this
counter, and if they are different the SQL Server recompiles the object.
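Usage is just the table name; a small sketch (the _employee_ table is the
hypothetical one from Q1.3), typically run right after the statistics are
refreshed:

    1> update statistics employee
    2> go
    1> sp_recompile employee
    2> go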
_________________________________________________________________

Q1.15: WHAT ARE THE DIFFERENT TYPES OF LOCKS?
_________________________________________________________________

First off, just to get it out of the way: there is no method to perform row
level locking. If you think you need row level locking, you probably aren't
thinking set based -- see Q1.1 for set processing.

The SQL Server uses locking in order to ensure the sanity of your queries.
Without locking there is no way to ensure the integrity of your operation.
Imagine a transaction that debited one account and credited another. If the
transaction didn't lock out readers/writers then someone could potentially
see erroneous data.

Essentially, the SQL Server attempts to use the least intrusive lock
possible, a page lock, to satisfy a request. If it reaches around 200 page
locks, then it escalates the lock to a table lock and releases all page
locks, thus performing the task more efficiently.

There are three types of locks:

* page locks
* table locks
* demand locks

Page Locks

There are three types of page locks:

* shared
* exclusive
* update

shared

These locks are requested and used by readers of information. More than one
connection can hold a shared lock on a data page. This allows for multiple
readers.

exclusive

The SQL Server uses exclusive locks when data is to be modified. Only _one_
connection may have an exclusive lock on a given data page. If a table is
large enough and the data is spread sufficiently, more than one connection
may update different data pages of a given table simultaneously.

update

An update lock is placed during a _delete_ or an _update_ while the SQL
Server is hunting for the pages to be altered. While an update lock is in
place, there can be shared locks, thus allowing for higher throughput. The
update lock(s) are promoted to exclusive locks once the SQL Server is ready
to perform the _delete/update_.

Table Locks

There are three types of table locks:

* intent
* shared
* exclusive

intent

Intent locks indicate the intention to acquire a shared or exclusive lock
on a data page. Intent locks are used to prevent other transactions from
acquiring a conflicting table lock on the table that holds those pages.

shared

This is similar to a page level shared lock but it affects the entire
table. This lock is typically applied during the creation of a
non-clustered index.

exclusive

This is similar to a page level exclusive lock but it affects the entire
table. If an _update_ or _delete_ affects the entire table, an exclusive
table lock is generated. Also, during the creation of a clustered index an
exclusive lock is generated.

Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server
sets a demand lock to indicate that a transaction is next in line to lock a
table or a page. This avoids indefinite postponement if there is a flurry
of readers when a writer wishes to make a change.
_________________________________________________________________

Q1.16: WHAT'S THE PURPOSE OF USING _HOLDLOCK_?
_________________________________________________________________

All _select/readtext_ statements acquire shared locks (see Q1.15) to
retrieve their information. After the information is retrieved, the shared
lock(s) is/are released.

The _holdlock_ option is used within _transactions_ so that after the
_select/readtext_ statement the locks are held until the end of the
transaction:

* commit transaction
* rollback transaction

If _holdlock_ is not used, the shared locks are released as soon as the
select completes.
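A small sketch of the difference (the _account_ table is hypothetical):
with _holdlock_, the shared lock from the select survives until the commit,
so no other transaction can change the row between the read and the write:

    begin transaction xfer
        select balance from account holdlock
        where acct_id = 42

        /* the shared lock is still held here... */

        update account set balance = balance - 50
        where acct_id = 42
    commit transaction xfer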
_________________________________________________________________

Q1.18: CAN I USE A NAMED PIPE TO _BCP/DUMP_ DATA OUT OR IN?

_System 10 and above._
_________________________________________________________________

If you would like to _bcp_ from a table to a named pipe and compress the
output:

1. % mknod bcp.pipe p
2. % compress < bcp.pipe > sysobjects.Z &
3. % bcp master..sysobjects out bcp.pipe -c -U ..
4. Use _ps(1)_ to determine when the _compress_ finishes.

To bcp _from_ my1db..dummy_table_1 _to_ my2db..dummy_table_2:

1. % mknod bcp.pipe p
2. % bcp my2db..dummy_table_2 in bcp.pipe -c -U .. &

   To avoid confusion between the above _bcp_ and the next, you may choose
   to either use a separate window or redirect the output to a file.

3. % bcp my1db..dummy_table_1 out bcp.pipe -c -U ..
_________________________________________________________________

Q1.19: WHAT'S A GOOD EXAMPLE OF A TRANSACTION?
_________________________________________________________________

This answer is geared for Online Transaction Processing (OLTP)
applications.

To gain maximum throughput all your transactions should be in stored
procedures - see Q3.8. The transactions within each stored procedure should
be short and simple. All validation should be done outside of the
transaction and only the modification to the database should be done within
the transaction. Also, don't forget to name the transaction for _sp_whodo_
- see Q4.2.

The following is an example of a _good_ transaction:

    /* perform validation */
    select ...
    if ... /* error */
        /* give error message */
    else   /* proceed */
    begin
        begin transaction acct_addition
        update ...
        insert ...
        commit transaction acct_addition
    end

The following is an example of a _bad_ transaction:

    begin transaction poor_us
    update X ....
    select ...
    if ... /* error */
        /* give error message */
    else   /* proceed */
    begin
        update ...
        insert ...
    end
    commit transaction poor_us

This is bad because:

* the first update on table X holds its lock throughout the transaction.
  The idea with OLTP is to get in and out _fast_.
* if an error message is presented to the end user and we await their
  response, we'll maintain the lock on table X until the user presses
  return. If the user is out in the can we can wait for hours.
_________________________________________________________________

Q1.20: BUILDMASTER CONFIGURATION DEFINITIONS
_________________________________________________________________

_Attention!_ Please be very careful with these parameters. Use them only at
your own risk. Be sure to keep a copy of the original parameters and to
have a dump of all databases (including master) handy.
_________________________________________________________________

The following is a list of configuration parameters and their effect on the
SQL Server. Changes to these parameters can affect performance of the
server. Sybase does not recommend modifying these parameters without first
discussing the change with Sybase Tech Support. This list is provided for
information only.

The parameters fall into two categories:

* configurable through sp_configure, and
* not configurable, but changeable through 'buildmaster -y=value -d'

Configurable variables:

crecinterval: The recovery interval specified in minutes.

ccatalogupdates: A flag to inform whether system catalogs can be updated or
not.

cusrconnections: This is the number of user connections allowed in SQL
Server. This value + 3 (one each for the checkpoint, network and mirror
handlers) makes up the number of pss configured in the server.
_________________________________________________________________

cfgpss: Number of PSS configured in the server. This value will always be 3
more than cusrconnections.
The reason is that we need a PSS each for the checkpoint, network and
mirror handlers. THIS IS NOT CONFIGURABLE.
_________________________________________________________________

cmemsize: The total memory configured for the Server in 2k units. This is
the memory the server will use for both Server and Kernel structures. For
Stratus or any 4k pagesize implementation of SQL Server, certain values
will change as appropriate.

cdbnum: This is the number of databases that can be open in SQL Server at
any given time.

clocknum: Variable that defines and controls the number of logical locks
configured in the system.

cdesnum: This is the number of objects that can be open at a given point in
time.

cpcacheprcnt: This is the percentage of cache that should be used for
procedures to be cached in.

cfillfactor: Fill factor for indexes.

ctimeslice: This value is in units of milliseconds. It determines how much
time a task is allowed to run before it yields. This value is internally
converted to ticks. See below the explanations for cclkrate, ctimemax etc.

ccrdatabasesize: The default size of a database when it is created. This
value is in megabytes and the default is 2Meg.

ctappreten: An outdated, unused variable.

crecoveryflags: A toggle flag which will display certain recovery
information during database recoveries.

cserialno: An informational variable that stores the serial number of the
product.

cnestedtriggers: Flag that controls whether nested triggers are allowed or
not.

cnvdisks: Variable that controls the number of device structures that are
allocated, which affects the number of devices that can be opened during
server boot up. If the user defined 20 devices and this value is configured
to be 10, during recovery only 10 devices will be opened and the rest will
get errors.

cfgsitebuf: This variable controls the maximum number of site handler
structures that will be allocated. This in turn controls the number of site
handlers that can be active at a given instance.

cfgrembufs: This variable controls the number of remote buffers needed to
send to and receive from remote sites. This value should be set to the
number of logical connections configured (see below).

cfglogconn: This is the number of logical connections that can be open at
any instance. This value controls the number of resource structures
allocated and hence affects the overall logical connections combined across
different sites. THIS IS NOT PER SITE.

cfgdatabuf: Maximum number of pre-read packets per logical connection. If
logical connections are set to 10, and cfgdatabuf is set to 3, then the
number of resources allocated will be 30.

cfupgradeversion: Version number of the last upgrade program run on this
server.

csortord: Sort order of the SQL Server.

cold_sortord: When sort orders are changed, the old sort order is saved in
this variable to be used during recovery of the database after the Server
is rebooted with the sort order change.

ccharset: Character Set used by the SQL Server.

cold_charset: Same as cold_sortord except it stores the previous Character
Set.
_________________________________________________________________

cdflt_sortord: page # of sort order image definition. This should not be
changed at any point. This is a server-only variable.

cdflt_charset: page # of character set image definition. This should not be
changed at any point. This is a server-only variable.

cold_dflt_sortord: page # of previous sort order image definition. This
should not be changed at any point. This is a server-only variable.
cold_dflt_charset: Page # of the previous character set image
definition. This should not be changed at any point. This is a
server-only variable.
_________________________________________________________________

cdeflang: Default language used by SQL Server.

cmaxonline: Maximum number of engines that can be brought online.
This number should not be more than the # of CPUs available on the
system. On a single-CPU system like the RS6000 this value is always 1.

cminonline: Minimum number of engines that should be online. This is
1 by default.

cengadjinterval: A noop variable at this time.

cfgstacksz: Stack size per task configured. This doesn't include the
guard area of the stack space. The guard area can be altered through
cguardsz.
_________________________________________________________________

cguardsz: The size of the guard area. The SQL Server will allocate
stack space for each task by adding cfgstacksz (configurable through
sp_configure) and cguardsz (default is 2K). This has to be a multiple
of PAGESIZE, which will be 2K or 4K depending on the implementation.

cstacksz: Size of the fixed stack space allocated per task, including
the guard area.
_________________________________________________________________

Non-configurable values:
_________________________________________________________________

_TIMESLICE, CTIMEMAX ETC.:_
_________________________________________________________________

1 millisecond = 1/1000th of a second. 1 microsecond = 1/1000000th of
a second.

"Tick": the interval between two clock interrupts, in real time.

"cclkrate": A value specified in microsecond units. Normally, on
systems where a fine-grained timer is not available or the operating
system cannot set sub-second alarms, this value is set to 1000000
microseconds, which is 1 second. In other words, an alarm will go off
every second, i.e. you get 1 tick per second. On Sun4 this is set to
100000 microseconds, which results in an interrupt going off every
1/10th of a second, i.e. 10 ticks per second.

"avetimeslice": A value specified in millisecond units. This is the
value given in "sp_configure". The milliseconds are converted to
ticks as:

    ticks = avetimeslice * 1000 / cclkrate

"timeslice": The unit of this variable is ticks. This value is
derived from "avetimeslice". If "avetimeslice" is less than 1000
milliseconds then timeslice is set to 1 tick.

"ctimemax": The unit of this variable is ticks. A task is considered
to be in an infinite loop if the ticks consumed by that task exceed
the ctimemax value. This is when you get timeslice -201 or -1501
errors.

"cschedspins": For more information see Q1.13. This value alters the
behavior of the SQL Server scheduler. The scheduler will either run a
qualified task, look for I/O completion, or sleep for a while before
it can do anything useful. The cschedspins value determines how often
the scheduler will sleep, not how long. A low value suits an
I/O-bound SQL Server, while a high value suits a CPU-bound one; since
most SQL Servers run a mixed load, this value needs to be fine-tuned.
Based on practical behavior in the field, a single-engine SQL Server
should have cschedspins set to 1 and a multi-engine server to 2000.

Now that we've defined the units of these variables, what happens
when we change cclkrate? Assume we have cclkrate=100000.
A clock interrupt will occur every (100000/1000000) = 1/10th of a
second. A task that starts with 1 tick and can go up to
"ctimemax=1500" ticks can potentially take 1/10 sec * (1500 + 1)
ticks, which is about 150 seconds.

Now change the cclkrate to 75000. A clock interrupt will occur every
(75000/1000000) = 3/40th of a second. A task that starts with 1 tick
and can go up to ctimemax=1500 ticks can potentially take
3/40 sec * (1500 + 1) ticks, which is about 112 seconds.

Decreasing the cclkrate value decreases the time allowed for each
task. If a task cannot voluntarily yield within that time, the
scheduler will kill it. UNDER NO CIRCUMSTANCES should the cclkrate
value be changed. The default ctimemax value should be set to 1500.
This is an empirical value; it can be changed under special
circumstances, and then strictly under the guidance of DSE.
_________________________________________________________________

cfgdbname: Name of the master device is saved here. This is 64 bytes
in length.

cfgpss: This is a derived value: cusrconnections + 3. See
cusrconnections above.

cfgxdes: This value defines the number of transactions that can be
performed by a task at a given instant. Changing this value to more
than 32 will have no effect on the server.

cfgsdes: This value defines the number of open tables per task,
typically for a query: it is the number of tables specified in a
query, including subqueries. Sybase advises not to change this value;
it would significantly change the size of the per-user resource in
SQL Server.

cfgbuf: This is a derived variable, based on the total memory
configured minus the resource sizes for databases, objects, locks and
other kernel memories.

cfgdes: This is the same as cdesnum. Other values will have no effect
on it.

cfgprocedure: This is a derived value, based on the cpcacheprcnt
variable.

cfglocks: This is the same as clocknum. Other values will have no
effect on it.

cfgcprot: This variable defines the number of cache protectors per
task. This is used internally by the SQL Server. Sybase advises not
to modify this value, as the default of 15 will be more than
sufficient.

cnproc: This is a derived value based on cusrconnections plus the
Sybase internal tasks, both visible and non-visible.

cnmemmap: This is an internal variable that keeps track of SQL Server
memory. Modifying this value will not have any effect.

cnmbox: Number of mailbox structures that need to be allocated. More
used in VMS environments than UNIX environments.

cnmsg: Used in tandem with cnmbox.

cnmsgmax: Maximum number of messages that can be passed between
mailboxes.

cnblkio: Number of disk I/O requests (async and direct) that can be
processed at a given instant. This is a global value for all the
engines, not a per-engine value. It is directly dependent on the
number of I/O requests that can be processed by the operating system,
and so varies by operating system.

cnblkmax: Maximum number of I/O requests that can be processed at any
given time. Normally cnblkio, cnblkmax and cnmaxaio_server should be
the same.

cnmaxaio_engine: Maximum number of I/O requests that can be processed
by one engine. Since engines are operating system processes, if there
is a limit imposed by the operating system on a per-process basis
then this value should be set. Otherwise it is a noop.

cnmaxaio_server: The total number of I/O requests the SQL Server can
do.
This value is directly dependent on the number of I/O requests that
can be processed by the operating system, and so varies by operating
system.

csiocnt: Not used.

cnbytio: Similar to the disk I/O requests, this is for network I/O
requests. It includes disk/tape dumps as well. This value is for the
whole SQL Server, including all engines.

cnbytmax: Maximum number of network I/O requests, including disk/tape
dumps.

cnalarm: Maximum number of alarms, including the alarms used by the
system. These are typically used when users issue "waitfor delay"
commands.

cfgmastmirror: Mirror device name for the master device.

cfgmastmirror_stat: Status of mirror devices for the master device,
such as serial/dynamic mirroring etc.

cindextrips: This value determines the aging of an index buffer
before it is removed from the cache.

coamtrips: This value determines the aging of an OAM buffer before it
is removed from the cache.

cpreallocext: This value determines the number of extents that will
be allocated while doing bcp.

cbufwashsize: This value determines when to flush modified buffers in
the cache.
_________________________________________________________________

Q1.21: DBCC COMMAND REFERENCE
_________________________________________________________________

If you know of any more DBCC commands, please mail them to
pablo@sgi.com. For your consumption, here they are; use at your own
risk:

    * allocdump( dbid, page )
    * bhash( { print_bufs | no_print }, bucket_limit )
    * buffer( [ dbid ][, objid ][, nbufs ], printopt = { 0 | 1 | 2 }, buftype )
    * bytes( startaddress, length )
    * checkalloc[ ( dbname [, fix | nofix ] ) ]
    * checkdb[ ( dbname [, skip_ncindex ] ) ]
    * checktable( tablename | tabid [, skip_ncindex ] )
    * dbinfo( [ dbname ] )
    * dbrepair( dbid, option = { dropdb | fixindex | fixsysindex }, table, indexid )
    * dbtable( dbid )
    * delete_row( dbid, pageid, delete_by_row = { 1 | 0 }, rownum )
    * des( [ dbid ][, objid ] )
    * extentcheck( dbid, objid, indexid, sort = { 1 | 0 } )
    * extentdump( dbid, page )
    * extentzap( dbid, objid, indexid, sort )
    * findnotfullextents( dbid, objid, indexid, sort = { 1 | 0 } )
    * fix_al( [ dbname ] )
    * help( dbcc_command )
    * ind( dbid, objid, printopt = { 0 | 1 | 2 } )
    * indexalloc( tablename | tabid, indid, [ full | optimized | fast ], [ fix | nofix ] )
    * locateindexpgs( dbid, objid, page, indexid, level )
    * lock
    * log( [dbid][,objid][,page][,row][,nrecords][,type={-1..36}], printopt={0|1} )
    * memusage
    * netmemshow( option = { 1 | 2 | 3 } )
    * netmemusage
    * newalloc( dbname, option = { 1 | 2 | 3 } )
    * page( dbid, pagenum [, printopt={0|1|2} ][, cache={0|1} ][, logical={1|0} ] )
    * pglinkage( dbid, start, number, printopt={0|1|2}, target, order={1|0} )
    * pktmemshow( option = {spid} )
    * procbuf( dbid, objid, nbufs, printopt = { 0 | 1 } )
    * prtipage( dbid, objid, indexid, indexpage )
    * pss( suid, spid, printopt = { 1 | 0 } )
    * rebuildextents( dbid, objid, indexid )
    * resource
    * show_bucket( dbid, pageid, lookup_type )
    * tab( dbid, objid, printopt = { 0 | 1 | 2 } )
    * tablealloc( tablename | tabid, [ full | optimized | fast ], [ fix | nofix ] )
    * traceoff( tracenum [, tracenum ... ] )
    * traceon( tracenum [, tracenum ... ] )
    * undo( dbid, pageno, rowno )
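A quick illustration of how these commands are typically driven from
isql (the page number here is arbitrary, and output formats vary by
release; this is a sketch, not a recipe):

    /* Send dbcc output to the session instead of the errorlog,
    ** then dump page 1 of the master database (dbid 1). */
    dbcc traceon(3604)
    go
    dbcc page(1, 1)
    go
    dbcc traceoff(3604)
    go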
_________________________________________________________________

Q1.22: HOW TO SET _TS ROLE_
_________________________________________________________________

Some _DBCC_ commands require that you set _TS Role_ in order to run
them. Here's how to set it. Log in to the Server as _sa_ and perform
the following:

    sp_role "grant", sybase_ts_role, sa
    go
    set role "sybase_ts_role" on
    go
_________________________________________________________________

Q1.23: HOW TO CLEAR A _LOG SUSPEND_
_________________________________________________________________

In System 10, when you create a database with its log on its own
device, SQL Server tells you the threshold of your log as a number of
log pages; as you alter the log segment, the threshold becomes
larger. You're expected to write this information down so that you
can build a threshold manager: a stored procedure that dumps the
transaction log whenever SQL Server hits that threshold (see the
sketch below). That way you never get to see error 1105.

Otherwise, SQL Server will suspend all activity when the threshold is
hit. Anything you do, including _dump tran with no_log_, will not
clear the log; the only way to clear it is to unsuspend the process
with the following command:

    1> select lct_admin("unsuspend", db_id)
    2> go

immediately followed by:

    dump tran _db_name_ with truncate_only

or, when the above command doesn't work, you need to issue:

    dump tran _db_name_ with no_log

If you wish to retain the pre-System 10 behavior, where SQL Server
aborts the process when the threshold is hit, then you need to set
_abort xact on log full_ through _sp_dboption_ on each database. That
way, when the threshold is hit your process is aborted, the
transaction is rolled back, and you won't see your process get into
_log suspend_ mode.
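Here's a minimal sketch of such a threshold procedure. In System 10
the server looks for a procedure named _sp_thresholdaction_ in the
database when a threshold is crossed; the dump device name below is
made up, and the parameter list is from memory, so check the System
Administration Guide for your release before relying on it:

    create procedure sp_thresholdaction
        @dbname       varchar(30),
        @segment_name varchar(30),
        @space_left   int,
        @status       int
    as
        /* Free up log space before activity is suspended. */
        dump transaction @dbname to logdump_device  /* made-up device */
        print "threshold crossed: transaction log dumped"
    go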
_________________________________________________________________

Q1.24: HOW TO MANUALLY DROP A TABLE
_________________________________________________________________

Occasionally you may find that after you issued a _drop table_
command the SQL Server crashed, and consequently the table didn't get
dropped entirely. Sure, you can't see it, but that sucker is still
floating around somewhere. Here's a list of instructions to follow
when trying to drop a corrupt table:

 1. Allow updates to the system catalogs:

        sp_configure allow, 1
        go
        reconfigure with override
        go

 2. Write the _db_id_ down:

        use _db_name_
        go
        select db_id()
        go

 3. Write down the _id_ of the _bad_table_:

        select id from sysobjects where name = _bad_table_name_
        go

 4. You will need the index IDs to run _dbcc extentzap_. Also,
    remember that if the table has a clustered index you will need to
    run _extentzap_ on index "0", even though there is no sysindexes
    entry for that indid:

        select indid from sysindexes where id = _table_id_
        go

 5. This is not required, but it's a good idea:

        begin transaction
        go

 6. Type in this short script; it gets rid of all system catalog
    information for the object, including any object and procedure
    dependencies that may be present. Some of the deletes are
    unnecessary, but better safe than sorry:

        declare @obj int
        select @obj = id from sysobjects where name = _bad_table_name_

        delete syscolumns where id = @obj
        delete sysindexes where id = @obj
        delete sysobjects where id = @obj
        delete sysprocedures where id in
               (select id from sysdepends where depid = @obj)
        delete sysdepends where depid = @obj
        delete syskeys where id = @obj
        delete syskeys where depid = @obj
        delete sysprotects where id = @obj
        delete sysconstraints where tableid = @obj
        delete sysreferences where tableid = @obj
        go

 7. Just do it!

        commit transaction
        go

 8. Gather information to run _dbcc extentzap_:

        sp_dboption _db_name_, read, true
        go
        use _db_name_
        go
        checkpoint
        go

 9. Run _dbcc extentzap_ once for _each_ index (including index 0,
    the data level) that you got from step 4:

        use master
        go
        dbcc traceon (3604)
        go
        dbcc extentzap (_db_id_, _obj_id_, _indx_id_, 0)
        go
        dbcc extentzap (_db_id_, _obj_id_, _indx_id_, 1)
        go

    Notice that extentzap runs _twice_ for each index. This is
    because the last parameter (the _sort_ bit) might be 0 or 1 for
    each index, and you want to be absolutely sure you clean them all
    out.

10. Clean up after yourself:

        sp_dboption _db_name_, read, false
        go
        use _db_name_
        go
        checkpoint
        go
        sp_configure allow, 0
        go
        reconfigure with override
        go
_________________________________________________________________

Q1.25: HOW DO I CORRECT _TIMESLICE -201_
_________________________________________________________________

Why increase it? Basically, it allows a task to run longer before the
scheduler declares it stuck. Each task on the system is scheduled
onto the CPU for a period of time, called the timeslice, during which
it does some work; the work is resumed when its next turn comes
around. The process has up until the value of _ctimemax_ (a config
block variable) to finish its task. As the task is working away, the
scheduler counts down ctimemax units. When it gets to the value of
_ctimemax_ - 1, if the task is _stuck_ and for some reason cannot be
taken off the CPU, then a timeslice error is generated and the
process is infected.

Bear in mind that SQL Server will otherwise allow a Server process to
run as long as it needs to: it will not swap the process out for
another process to run; the process decides when it is "done" with
the Server CPU. If, however, a process goes on and on and never
relinquishes the Server CPU, then the Server will timeslice the
process.

Potential fix:

 1. Shut down the SQL Server.
 2. %buildmaster -d_your_device_ -yctimemax=2000
 3. Restart your SQL Server.

If the problem persists, contact Sybase Technical Support and tell
them what you have done already.
_________________________________________________________________

Q1.26: HOW DO I PIPE THE OUTPUT OF ONE _ISQL_ TO ANOTHER?
_________________________________________________________________

The following example queries _sysdatabases_, takes each database
name, builds a string of the form _sp_helpdb dbname_ and sends the
result to another _isql_. This is accomplished using the Bourne shell
_sh(1)_ and _sed(1)_ to strip unwanted output (see Q1.9):

    #!/bin/sh
    PASSWD=yuk
    DSQUERY=GNARLY_HAIRBALL

    echo "$PASSWD
    print \"$PASSWD\"
    go
    select 'sp_helpdb ' + name + char(10) + 'go' from sysdatabases
    go" | isql -U sa -S $DSQUERY -w 1000 | \
         sed -e '/affected/d' -e '/---/d' -e '/Password:/d' | \
         isql -U sa -S $DSQUERY -w 1000

To help you understand this, you may wish to comment out any series
of pipes and see what output is generated.
_________________________________________________________________

Q1.27: HOW DO I TURN OFF _MARKED SUSPECT_ ON MY DATABASE?
_________________________________________________________________

Say one of your databases is marked suspect as the SQL Server is
coming up. Here are the steps to take to unset the flag. _Remember to
fix the problem that caused the database to be marked suspect after
switching the flag._

Pre System 10

 1. sp_configure "allow", 1
 2. reconfigure with override
 3. select status - 320 from sysdatabases
    where dbid = db_id("my_hosed_db") - save this value.
 4. begin transaction
 5. update sysdatabases set status = _-32767_
    where dbid = db_id("my_hosed_db")
 6. commit transaction
 7. You should now be able to access the database for the flag to be
    cleared out.
    If not:
        1. shutdown
        2. startserver -f RUN_*
 8. _Fix the problem that caused the database to be marked suspect._
 9. begin transaction
10. update sysdatabases set status = _saved_value_
    where dbid = db_id("my_hosed_db")
11. commit transaction
12. sp_configure "allow", 0
13. reconfigure

System 10

 1. sp_configure "allow", 1
 2. reconfigure with override
 3. select status - 320 from sysdatabases
    where dbid = db_id("my_hosed_db") - save this value.
 4. begin transaction
 5. update sysdatabases set status = _-32768_
    where dbid = db_id("my_hosed_db")
 6. commit transaction
 7. shutdown
 8. startserver -f RUN_*
 9. _Fix the problem that caused the database to be marked suspect._
10. begin transaction
11. update sysdatabases set status = _saved_value_
    where dbid = db_id("my_hosed_db")
12. commit transaction
13. sp_configure "allow", 0
14. reconfigure
15. shutdown
16. startserver -f RUN_*
_________________________________________________________________

Q1.28: CERTIFIED SYBASE PROFESSIONAL - _CSPDBA_
_________________________________________________________________

Here's a list of commonly asked questions about becoming a _CSPDBA_:

What are the exams like?

The exams are administered by Drake Testing and Technologies and are
given at Drake-authorized testing centers. The Environment and
Operations exams each take an hour, and the Fundamentals exam takes
an hour and a half. Each exam contains between 60 and 90 questions.
Many of the questions are _multiple choice_, some are _select all
that apply_ and some are _fill in the blank_. Depending on the exam,
a score of 67% - 72% is required to pass. The exams are challenging,
but fair. Before taking an exam, Drake provides you with a short
_tutorial exam_ that you can take to get an idea of the format of the
exam questions.

You receive a report each time you complete an exam. The report shows
the passing score, your total score, and your score in various
sections of the exam. (You aren't told which specific questions you
answered correctly or incorrectly.)

How do I register for the exams?

Call 1-800-8SYBASE, select option 2, then option 2 again. You will be
connected to a Drake representative. Currently each exam costs $150.

What happens once I pass?

You will receive a certificate in the mail about a month after you've
passed all the exams. When you receive your certificate, you'll also
have the opportunity to enter into a licensing agreement that allows
you to use the Certified Sybase Professional service mark (logo) in
your office and on your business cards. If your company is an Open
Solutions partner, your certification is acknowledged by the
appearance of the CSP logo with your company's name in the Open
Solutions Directory. If you have a CompuServe account, you can obtain
access to a private section of _Sybase OpenLine_, a technical forum
on CompuServe.

What topics are covered?
    * Sybase SQL Server Fundamentals Exam Topics:
      + Sybase client/server architecture
      + SQL Server objects
      + Use of tables
      + Use of indexes
      + Use of columns
      + Use of defaults
      + Use of triggers
      + Use of keys
      + Use of check constraints
      + Use of datatypes
      + Use of cursors
      + System datatypes
      + Views
      + Data integrity
      + Rules
      + Select statements
      + Transaction management
      + Locking
      + Stored procedures
      + Local and global variables
    * Sybase SQL Server Environment Exam Topics:
      + Configuration and control
      + Starting the SQL Server
      + Accessing remote servers
      + Stopping the SQL Server
      + Using buildmaster
      + Installing the SQL Server
      + Using the standard databases
      + Admin utilities and tools
      + System stored procedures
      + Using system tables
      + Load and unload utilities
      + Resources
      + Disk mirroring
      + Creating databases
      + Managing segments
      + Managing transaction logs
      + Managing thresholds
      + Managing audit logs
      + Devices
      + Security
      + Establishing security
      + Roles
      + Managing user accounts
    * Sybase SQL Server Operations Exam Topics:
      + Monitoring
      + Starting the Backup Server
      + Monitoring the errorlog
      + Diagnostics
      + Resolving contention and locking problems
      + Managing application stored procedures
      + Recovery
      + Backup
      + Load
      + Backup strategies
      + Security
      + Establishing security
      + Roles
      + Managing user accounts
      + Admin utilities and tools
      + System stored procedures
      + Using system tables
      + Load and unload utilities
_________________________________________________________________

Q1.29: WHAT IS _CMAXPKTSZ_ GOOD FOR?
_________________________________________________________________

_cmaxpktsz_ corresponds to the parameter "maximum network packet
size", which you can see through _sp_configure_. I recommend updating
this value only through _sp_configure_. If some of your applications
send or receive large amounts of data across the network, these
applications can achieve significant performance improvement by using
larger packet sizes. Two examples are large bulk copy operations and
applications reading or writing large text or image values.
Generally, you want to keep the value of "default network packet
size" small for users performing short queries, and allow users who
send or receive large volumes of data to request larger packet sizes
by setting the "maximum network packet size" configuration variable.

_caddnetmem_ corresponds to the parameter "additional netmem", which
you can see through _sp_configure_. Again, I recommend updating this
value only through _sp_configure_. "additional netmem" sets the
maximum size of additional memory that can be used for network
packets that are larger than SQL Server's default packet size. The
default value for additional netmem is 0, which means that no extra
space has been allocated for large packets. See the discussion below,
under maximum network packet size, for information on setting this
configuration variable. Memory allocated with additional netmem is
added to the memory allocated by "memory"; it does not affect other
SQL Server memory uses.

SQL Server guarantees that every user connection will be able to log
in at the default packet size. If you increase maximum network packet
size but additional netmem remains set to 0, clients cannot use
packet sizes that are larger than the default size: all allocated
network memory will be reserved for users at the default size. In
this situation, users who request a large packet size when they log
in receive a warning message telling them that their application will
use the default size.

To determine the value for additional netmem if your applications use
larger packet sizes:

    * Estimate the number of simultaneous users who will request the
      large packet sizes and the sizes their applications will
      request; sum the bytes across those users.
    * Multiply this sum by three, since each connection needs three
      buffers.
    * Add 2% for overhead, rounded up to the next multiple of 512.

For example, for 10 users requesting 4096-byte packets:
10 * 4096 * 3 = 122880 bytes; adding 2% gives 125337.6, which rounds
up to 125440, the next multiple of 512.
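A sketch of how the two parameters are typically raised together
through _sp_configure_ (the numbers are the ones from the worked
example above, not recommendations):

    sp_configure "maximum network packet size", 4096
    go
    sp_configure "additional netmem", 125440
    go
    reconfigure
    go

Clients then request the larger size at login time (for example, via
bcp's -A flag or the packet-size property of an Open Client
connection).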
_________________________________________________________________

Q1.30: FAQ ON PARTITIONING
_________________________________________________________________

Index of Sections

    * What Is Table Partitioning?
      + Page Contention for Inserts
      + I/O Contention
      + Caveats Regarding I/O Contention
    * Can I Partition Any Table?
      + How Do I Choose Which Tables To Partition?
    * Does Table Partitioning Require User-Defined Segments?
    * Can I Run Any Transact-SQL Command on a Partitioned Table?
    * How Does Partition Assignment Relate to Transactions?
    * Can Two Tasks Be Assigned to the Same Partition?
    * Must I Use Multiple Devices to Take Advantage of Partitions?
    * How Do I Create A Partitioned Table That Spans Multiple Devices?
    * How Do I Take Advantage of Table Partitioning with bcp in?
    * Getting More Information on Table Partitioning

What Is Table Partitioning?

Table partitioning is a procedure that creates multiple page chains
for a single table. The primary purpose of table partitioning is to
improve the performance of concurrent inserts to a table by reducing
contention for the last page of a page chain. Partitioning can also
potentially improve performance by making it possible to distribute a
table's I/O over multiple database devices.

Page Contention for Inserts

By default, SQL Server stores a table's data in one doubly linked set
of pages called a page chain. If the table does not have a clustered
index, SQL Server makes all inserts to the table in the last page of
the page chain. When a transaction inserts a row into a table, SQL
Server holds an exclusive page lock on the last page while it inserts
the row. If the current last page becomes full, SQL Server allocates
and links a new last page.

As multiple transactions attempt to insert data into the table at the
same time, performance problems can occur: only one transaction at a
time can obtain an exclusive lock on the last page, so other
concurrent insert transactions block each other. Partitioning a table
creates multiple page chains (partitions) for the table and,
therefore, multiple last pages for insert operations. A partitioned
table has as many page chains and last pages as it has partitions.

I/O Contention

Partitioning a table can reduce I/O contention when SQL Server writes
information in the cache to disk. If a table's segment spans several
physical disks, SQL Server distributes the table's partitions across
fragments on those disks when you create the partitions. A fragment
is a piece of disk on which a particular database is assigned space;
multiple fragments can sit on one disk or be spread across multiple
disks.

When SQL Server flushes pages to disk and your fragments are spread
across different disks, I/Os assigned to different physical disks can
occur in parallel. To improve I/O performance for partitioned tables,
you must ensure that the segment containing the partitioned table is
composed of fragments spread across multiple physical devices.

Caveats Regarding I/O Contention

Be aware that when you use partitioning to balance I/O you run the
risk of disrupting load balancing even as you are trying to achieve
it.
The following scenarios can keep you from gaining the load balancing
benefits you want:

    * You are partitioning an existing table. The existing data could
      be sitting on any fragment. Because partitions are randomly
      assigned, you run the risk of filling up a fragment; the
      partition will then steal space from other fragments, thereby
      disrupting load balancing.
    * Your fragments differ in size.
    * The segment maps are configured such that other objects are
      using the fragments to which the partitions are assigned.
    * A very large bcp job inserts many rows within a single
      transaction. Because a partition is assigned for the lifetime
      of a transaction, a huge amount of data could go to one
      particular partition, thus filling up the fragment to which
      that partition is assigned.

Can I Partition Any Table?

No. You cannot partition the following kinds of tables:

    1. Tables with clustered indexes
    2. SQL Server system tables
    3. Work tables
    4. Temporary tables
    5. Tables that are already partitioned. However, you can
       unpartition and then re-partition tables to change the number
       of partitions.

How Do I Choose Which Tables To Partition?

You should partition heap tables that have large amounts of
concurrent insert activity. (A heap table is a table with no
clustered index.) Here are some examples:

    1. An "append-only" table to which every transaction must write
    2. Tables that provide a history or audit list of activities
    3. A new table into which you load data with bcp in. Once the
       data is loaded in, you can unpartition the table. This enables
       you to create a clustered index on the table, or issue other
       commands not permitted on a partitioned table.

Does Table Partitioning Require User-Defined Segments?

No. By design, each table is intrinsically assigned to one segment,
called the default segment. When a table is partitioned, any
partitions on that table are distributed among the devices assigned
to the default segment. In the example under "How Do I Create A
Partitioned Table That Spans Multiple Devices?", the table sits on a
user-defined segment that spans three devices.

Can I Run Any Transact-SQL Command on a Partitioned Table?

No. Once you have partitioned a table, you cannot use any of the
following Transact-SQL commands on the table until you unpartition
it:

    1. create clustered index
    2. drop table
    3. sp_placeobject
    4. truncate table
    5. alter table table_name partition n

How Does Partition Assignment Relate to Transactions?

A user is assigned to a partition for the duration of a transaction;
assignment of partitions resumes with the first insert in a new
transaction. The user holds the lock, and therefore the partition,
until the transaction ends. For this reason, if you are inserting a
great deal of data, you should batch it into separate jobs, each
within its own transaction. See "How Do I Take Advantage of Table
Partitioning with bcp in?" for details.

Can Two Tasks Be Assigned to the Same Partition?

Yes. SQL Server assigns partitions randomly, so there is always a
chance that two users will vie for the same partition when attempting
to insert, and one will lock the other out. The more partitions a
table has, the lower the probability of users trying to write to the
same partition at the same time.

Must I Use Multiple Devices to Take Advantage of Partitions?

It depends on which type of performance improvement you want. Table
partitioning improves performance in two ways: primarily, by
decreasing page contention for inserts and, secondarily, by
decreasing I/O contention. "What Is Table Partitioning?" explains
each in detail. If you want to decrease page contention you do not
need multiple devices. If you want to decrease I/O contention, you
must use multiple devices.
How Do I Create A Partitioned Table That Spans Multiple Devices?

Creating a partitioned table that spans multiple devices is a
multi-step procedure. In this example, we assume the following:

    * We want to create a new segment rather than using the default
      segment.
    * We want to spread the partitioned table across three devices,
      data_dev1, data_dev2, and data_dev3.

Here are the steps:

    1. Define a segment:

           sp_addsegment newsegment, my_database, data_dev1

    2. Extend the segment across all three devices:

           sp_extendsegment newsegment, my_database, data_dev2
           sp_extendsegment newsegment, my_database, data_dev3

    3. Create the table on the segment:

           create table my_table
           (names varchar(80) not null)
           on newsegment

    4. Partition the table:

           alter table my_table partition 30

How Do I Take Advantage of Table Partitioning with bcp in?

You can take advantage of table partitioning with bcp in by following
these guidelines:

    1. Break up the data file into multiple files and simultaneously
       run each of these files as a separate bcp job against one
       table. Running simultaneous jobs increases throughput.
    2. Choose a number of partitions greater than the number of bcp
       jobs. Having more partitions than processes (jobs) decreases
       the probability of page lock contention.
    3. Use the batch option of bcp in. For example, after every 100
       rows, force a commit. Here is the syntax of this command:

           bcp table_name in filename -b100

       Each time a transaction commits, SQL Server randomly assigns a
       new partition for the next insert. This, in turn, reduces the
       probability of page lock contention.

Getting More Information on Table Partitioning

For more information on table partitioning, see the chapter on
controlling physical data placement in the SQL Server Performance and
Tuning Guide.
_________________________________________________________________

Q1.31: SHRINKING VARCHAR(M) TO VARCHAR(N)
_________________________________________________________________

Before you start, make sure no existing data is longer than the new
limit:

    select max(datalength(column_name)) from affected_table

In other words, _please_ be sure you're going into this with your
head on straight.

How To Change System Catalogs

This information is Critical To The Defense Of The Free World, and
you would be Well Advised To Do It Exactly As Specified:

    1. In master:

           sp_configure allow, 1
           reconfigure with override

    2. Use the victim database.
    3. _begin tran_ - that way, if you upscrew, you can correct the
       damage.
    4. Perform your update (see the sketch below).
    5. _Check your results._ Did you get the expected number of rows
       affected? When you _select_ the rows you thought you changed,
       do you see your changes? If not, _rollback tran_ and try
       again.
    6. When everything is as it should be: _commit tran_.
    7. In master:

           sp_configure allow, 0
           reconfigure

       ...after all, you don't want some other bright kid coming in
       and making other changes behind yer back. You know what you're
       doing and why, but you'd be amazed how many others out there
       don't.
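For step 4, the update goes against _syscolumns_. A hypothetical
sketch, assuming we're shrinking mytable.mycol to varchar(40) and
that the datalength check above came back at 40 or less - verify the
syscolumns layout on your release before trusting this:

    update syscolumns
    set    length = 40
    where  id   = object_id("mytable")
    and    name = "mycol"

One row affected is the result to expect; anything else, rollback.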
_________________________________________________________________

Q2.1: POINT CHARACTERISTIC FUNCTIONS
_________________________________________________________________

These functions return zero if the condition on columns a and b is
not true, and one if it is true:

    Equation             Emulation
    -------------------  ---------------------------------------------
    a =  b               1 - abs(sign(a - b))
    a != b               abs(sign(a - b))
    a <  b               1 - sign(1 + sign(a - b))
    a <= b               sign(1 - sign(a - b))
    a >  b               1 - sign(1 - sign(a - b))
    a >= b               sign(1 + sign(a - b))
    x between a and b    sign(1 + sign(b - x)) - sign(1 + sign(a - x))
_________________________________________________________________

Q2.2: HOW TO IMPLEMENT _IF-THEN-ELSE_ WITHIN A _SELECT_ CLAUSE
_________________________________________________________________

If you need to implement the following condition within a _select_
clause:

    if @val = 'small' then
        print 'petit'
    else
        print 'grand'
    fi

do the following:

    select isnull(substring('petit', charindex('small', @val), 255),
                  'grand')

To test it out, try the following T-SQL:

    declare @val char(20)
    select @val = 'grand'
    select isnull(substring('petit', charindex('small', @val), 255),
                  'grand')
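The same trick nests for more than two branches. A small sketch (the
small/medium to petit/moyen mapping is made up for illustration): if
the first charindex misses, its substring yields NULL and evaluation
falls through to the next candidate:

    declare @val char(20)
    select @val = 'medium'
    select isnull(substring('petit', charindex('small',  @val), 255),
           isnull(substring('moyen', charindex('medium', @val), 255),
                  'grand'))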
_________________________________________________________________

Q2.3: HOW TO INVERT/PIVOT A TABLE
_________________________________________________________________

In some applications it's necessary to store details by row but to
report the results by column. Here's an example describing the
problem _and_ a solution for it - the SQL emulates the Oracle
_decode_ function.

    * Say you had the following table:

          create table #account
          (acct int, month int, amt int)
          go

    * ...and it was populated as follows:

          insert into #account select 1, 1, 10
          insert into #account select 1, 2, 10
          insert into #account select 1, 3, 10
          insert into #account select 1, 4, 10
          insert into #account select 1, 5, 10
          insert into #account select 1, 6, 10
          insert into #account select 1, 7, 10
          insert into #account select 1, 8, 10
          insert into #account select 1, 9, 10
          insert into #account select 1, 10, 10
          insert into #account select 1, 11, 10
          insert into #account select 1, 12, 10
          go
          insert into #account select 2, 1, 20
          insert into #account select 2, 2, 20
          insert into #account select 2, 3, 20
          insert into #account select 2, 4, 20
          insert into #account select 2, 5, 20
          insert into #account select 2, 6, 20
          insert into #account select 2, 7, 20
          insert into #account select 2, 8, 20
          insert into #account select 2, 9, 20
          insert into #account select 2, 10, 20
          insert into #account select 2, 11, 20
          go

    * So it contained the following data:

          select * from #account
          go

          acct        month       amt
          ----------- ----------- -----------
                    1           1          10
                    1           2          10
                    1           3          10
                    1           4          10
                    1           5          10
                    1           6          10
                    1           7          10
                    1           8          10
                    1           9          10
                    1          10          10
                    1          11          10
                    1          12          10
                    2           1          20
                    2           2          20
                    2           3          20
                    2           4          20
                    2           5          20
                    2           6          20
                    2           7          20
                    2           8          20
                    2           9          20
                    2          10          20
                    2          11          20

    * and you executed the following SQL:

          select acct,
                 mth1  = sum(amt * (1 - abs(sign(month -  1)))),
                 mth2  = sum(amt * (1 - abs(sign(month -  2)))),
                 mth3  = sum(amt * (1 - abs(sign(month -  3)))),
                 mth4  = sum(amt * (1 - abs(sign(month -  4)))),
                 mth5  = sum(amt * (1 - abs(sign(month -  5)))),
                 mth6  = sum(amt * (1 - abs(sign(month -  6)))),
                 mth7  = sum(amt * (1 - abs(sign(month -  7)))),
                 mth8  = sum(amt * (1 - abs(sign(month -  8)))),
                 mth9  = sum(amt * (1 - abs(sign(month -  9)))),
                 mth10 = sum(amt * (1 - abs(sign(month - 10)))),
                 mth11 = sum(amt * (1 - abs(sign(month - 11)))),
                 mth12 = sum(amt * (1 - abs(sign(month - 12))))
          from #account
          group by acct

    * you would achieve the following output - one row per account,
      one column per month:

          acct mth1 mth2 mth3 mth4 mth5 mth6 mth7 mth8 mth9 mth10 mth11 mth12
          ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----- ----- -----
             1   10   10   10   10   10   10   10   10   10    10    10    10
             2   20   20   20   20   20   20   20   20   20    20    20  NULL
_________________________________________________________________

Q2.4: HOW TO PAD WITH LEADING ZEROS AN _INT_ OR _SMALLINT_
_________________________________________________________________

By example:

    declare @Integer int

    /* Good for positive numbers only. */
    select @Integer = 1000
    select "Positives Only" =
           right(replicate("0", 12) + convert(varchar, @Integer), 12)

    /* Good for positive and negative numbers. */
    select @Integer = -1000
    select "Both Signs" =
           substring("- +", (sign(@Integer) + 2), 1) +
           right(replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

    select @Integer = 1000
    select "Both Signs" =
           substring("- +", (sign(@Integer) + 2), 1) +
           right(replicate("0", 12) + convert(varchar, abs(@Integer)), 12)
    go

Produces the following results:

    Positives Only
    --------------
    000000001000

    Both Signs
    -------------
    -000000001000

    Both Signs
    -------------
    +000000001000
_________________________________________________________________

Q2.5: DIVIDE BY ZERO AND NULLS
_________________________________________________________________

During processing, if a divide by zero error occurs you will not get
the answer you want.
If you want the result set to come back with null displayed wherever
a divide by zero occurs, do the following:

    1> select * from total_temp
    2> go
    field1      field2
    ----------- -----------
    10          10
    10          0
    10          NULL

    (3 rows affected)

    1> select field1,
              field1 / (field2 * convert(int,
                        substring('1', 1, abs(sign(field2)))))
       from total_temp
    2> go
    field1
    ----------- -----------
    10          1
    10          NULL
    10          NULL
_________________________________________________________________

Q2.6: CONVERT MONTHS TO FINANCIAL MONTHS
_________________________________________________________________

To convert calendar months to financial-year months (i.e. July = 1,
Dec = 6, Jan = 7, June = 12):

    select ...
    ((sign(sign((datepart(month, GetDate()) - 6) * -1) + 1)
      * (datepart(month, GetDate()) + 6))
     +
     (sign(sign(datepart(month, GetDate()) - 7) + 1)
      * (datepart(month, GetDate()) - 6)))
    ...
    from ...

(Check: for July, month 7, the first term is 0 and the second is
7 - 6 = 1; for January, month 1, the first term is 1 + 6 = 7 and the
second is 0.)
_________________________________________________________________

Q2.7: HIERARCHY TRAVERSAL - BOMS
_________________________________________________________________

Alright, so you wanna know more about representing hierarchies in a
relational database? Before I get into the nitty gritty I should at
least give all of the credit for this algorithm to:

    "Hierarchical Structures: The Relational Taboo!, (Can Transitive
    Closure Queries be Efficient?)", by Michael J. Kamfonas, as
    published in 1992 in the "Relational Journal" (I don't know which
    volume or issue).

The basic algorithm goes like this. Given a tree (hierarchy) that
looks roughly like this (forgive the ASCII art - I hope you are using
a fixed font to view this):

                   a
                  / \
                 /   \
                /     \
               b       c
              / \    / | \
             /   \  /  |  \
            d     e f  g   h

note that the tree need not be balanced for this algorithm to work.

The next step assigns two numbers to each node in the tree, called
left and right numbers, such that the left and right numbers of each
node fall between the left and right numbers of all of its ancestors
(I'll get into the algorithm for assigning these left and right
numbers later, but, _hint: use a depth-first search_):

                  1a16
                  /  \
                 /    \
                /      \
             2b7        8c15
             / \       /  |  \
            /   \     /   |   \
          3d4   5e6 9f10 11g12 13h14

Side Note: The careful observer will notice that these left and right
numbers look an awful lot like a B-Tree index.

So, you will notice that all of the children of node 'a' have left
and right numbers between 1 and 16, and likewise all of the children
of 'c' have left and right numbers between 8 and 15. In a slightly
more relational format this table would look like:

    Table: hier
    node   parent   left_nbr   right_nbr
    -----  ------   --------   ---------
    a      NULL            1          16
    b      a               2           7
    c      a               8          15
    d      b               3           4
    e      b               5           6
    f      c               9          10
    g      c              11          12
    h      c              13          14

So, given a node name, say @node (in Sybase variable format), if you
want to know all of the children of the node you can do:

    SELECT h2.node
    FROM   hier h1,
           hier h2
    WHERE  h1.node     = @node
      AND  h2.left_nbr > h1.left_nbr
      AND  h2.left_nbr < h1.right_nbr

If you had a table that contained, say, the salary for each node in
your hierarchy (assuming a node is actually an individual in a
company) you could then figure out the total salary for all of the
people working underneath @node by doing:

    SELECT sum(s.salary)
    FROM   hier h1,
           hier h2,
           salary s
    WHERE  h1.node     = @node
      AND  h2.left_nbr > h1.left_nbr
      AND  h2.left_nbr < h1.right_nbr
      AND  s.node      = h2.node

Pretty cool, eh?
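To see the children query against the sample data above: plugging in
'b' for @node selects the rows whose left_nbr lies strictly between
b's left and right numbers (2 and 7):

    SELECT h2.node
    FROM   hier h1,
           hier h2
    WHERE  h1.node     = 'b'
      AND  h2.left_nbr > h1.left_nbr
      AND  h2.left_nbr < h1.right_nbr

    node
    ----
    d
    e

d (left number 3) and e (left number 5) qualify; nothing else falls
in the range.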
And, conversely, if you wanted to know how much it costs to manage
@node (i.e. the combined salary of all of the bosses of @node), you
can do:

    SELECT sum(s.salary)
    FROM   hier h1,
           hier h2,
           salary s
    WHERE  h1.node      = @node
      AND  h2.left_nbr  < h1.left_nbr
      AND  h2.right_nbr > h1.right_nbr
      AND  s.node       = h2.node

Now that you can see the algorithm in action everything looks peachy;
however, the sticky point is the method by which the left and right
numbers get assigned. Unfortunately, there is no easy method to do
this relationally (it can be done, it just ain't that easy). For a
real-world application that I have worked on, we had an external
program used to build and maintain the hierarchies, and it was this
program's responsibility to assign the left and right numbers.

But, in brief, here is the algorithm to assign left and right numbers
to every node in a hierarchy. Note while reading this that the
algorithm uses an array as a stack; however, since arrays are not
available in Sybase, they are (questionably) emulated using a temp
table.

    DECLARE @skip     int,
            @counter  int,
            @idx      int,
            @left_nbr int,
            @node     varchar(10)

    /*-- Initialize variables --*/
    SELECT @skip    = 1000, /* Leave gaps in left & right numbers */
           @counter = 0,    /* Counter of next available left number */
           @idx     = 0     /* Index into array */

    /*
     * The following table is used to emulate an array for Sybase;
     * for Oracle this wouldn't be a problem. :(
     */
    CREATE TABLE #a (
        idx      int         NOT NULL,
        node     varchar(10) NOT NULL,
        left_nbr int         NOT NULL
    )

    /*
     * I know that I always preach about not using cursors, and there
     * are ways to get around it, but in this case I am more worried
     * about readability than performance.
     */
    DECLARE root_cur CURSOR FOR
        SELECT h.node
        FROM   hier h
        WHERE  h.parent IS NULL
        FOR READ ONLY

    /*
     * Here we are populating our "stack" with all of the root
     * nodes of the hierarchy. We are using the cursor in order
     * to assign an increasing index into the "stack"...this could
     * be done using an identity column and a little trickery.
     */
    OPEN root_cur
    FETCH root_cur INTO @node
    WHILE (@@sqlstatus = 0)
    BEGIN
        SELECT @idx = @idx + 1
        INSERT INTO #a VALUES (@idx, @node, 0)
        FETCH root_cur INTO @node
    END
    CLOSE root_cur
    DEALLOCATE CURSOR root_cur

    /*
     * The following cursor will be employed to retrieve all of
     * the children of a given parent.
     */
    DECLARE child_cur CURSOR FOR
        SELECT h.node
        FROM   hier h
        WHERE  h.parent = @node
        FOR READ ONLY

    /*
     * While our stack is not empty.
     */
    WHILE (@idx > 0)
    BEGIN
        /*
         * Look at the element on the top of the stack.
         */
        SELECT @node     = node,
               @left_nbr = left_nbr
        FROM   #a
        WHERE  idx = @idx

        /*
         * If the element at the top of the stack has not been
         * assigned a left number yet, then we assign it one and copy
         * its children onto the stack as "nodes to be looked at".
         */
        IF (@left_nbr = 0)
        BEGIN
            /*
             * Set the left number of the current node to be
             * @counter + @skip. Note, we are doing a depth-first
             * traversal, assigning left numbers as we go.
             */
            SELECT @counter = @counter + @skip
            UPDATE #a
            SET    left_nbr = @counter
            WHERE  idx = @idx

            /*
             * Append the children of the current node to the "stack".
             */
            OPEN child_cur
            FETCH child_cur INTO @node
            WHILE (@@sqlstatus = 0)
            BEGIN
                SELECT @idx = @idx + 1
                INSERT INTO #a VALUES (@idx, @node, 0)
                FETCH child_cur INTO @node
            END
            CLOSE child_cur
        END
        ELSE
        BEGIN
            /*
             * It turns out that the current node already has a left
             * number assigned to it, so we just need to assign the
             * right number and update the node in the actual
             * hierarchy.
             */
            SELECT @counter = @counter + @skip
            UPDATE hier
            SET    left_nbr  = @left_nbr,
                   right_nbr = @counter
            WHERE  node = @node

            /*
             * "Pop" the current node off our "stack".
             */
            DELETE #a WHERE idx = @idx
            SELECT @idx = @idx - 1
        END
    END /* WHILE (@idx > 0) */
    DEALLOCATE CURSOR child_cur
While reading through this, you should notice that assigning the left
and right numbers to the entire hierarchy is very costly, especially
as the size of the hierarchy grows. If you put the above code in an
insert trigger on the hier table, the overhead of inserting each node
would be phenomenal. However, it is possible to reduce the overall
cost of an insertion into the hierarchy:

    1. By leaving huge gaps in the left & right numbers (using the
       @skip variable), you can reduce the circumstances in which the
       numbers need to be reassigned for a given insert. Thus, as
       long as you can squeeze a new node between an existing pair of
       left and right numbers, you don't need to do the re-assignment
       (which could affect all of the nodes in the hierarchy).
    2. By keeping an extra flag around in the hier table to indicate
       which nodes are leaf nodes (this could be maintained with a
       trigger as well), you avoid placing leaf nodes in the array
       and thus reduce the number of updates.

Deletes on this table should never cause the left and right numbers
to be re-assigned (you could even have a trigger automagically
re-parent orphaned hierarchy nodes).

All-in-all, this algorithm is very effective as long as the structure
of the hierarchy does not change very often, and even then, as you
can see, there are ways of getting around a lot of its
inefficiencies.
_________________________________________________________________

Q3.1: SYBASE SQL SERVER PERFORMANCE AND TUNING
_________________________________________________________________

All Components Affect Response Time & Throughput

We often think that high performance is defined as a fast data
server, but the picture is not that simple. Performance is determined
by all these factors:

    * The client application itself:
      + How efficiently is it written?
      + We will return to this later, when we look at application
        tuning.
    * The client-side library:
      + What facilities does it make available to the application?
      + How easy are they to use?
    * The network:
      + How efficiently is it used by the client/server connection?
    * The DBMS:
      + How effectively can it use the hardware?
      + What facilities does it supply to help build efficient, fast
        applications?
    * The size of the database:
      + How long does it take to dump the database?
      + How long to recreate it after a media failure?

Unlike some products which aim at performance on paper, Sybase aims
at solving the multi-dimensional problem of delivering high
performance for real applications.

_OBJECTIVES_

To gain an overview of important considerations and alternatives for
the design, development, and implementation of high performance
systems in the Sybase client/server environment. The issues we will
address are:

    * Client Application and API Issues
    * Physical Database Design Issues
    * Networking Issues
    * Operating System Configuration Issues
    * Hardware Configuration Issues
    * SQL Server Configuration Issues

_Client Application and Physical Database Design decisions will
account for over 80% of your system's "tuneable" performance, so plan
your project resources accordingly!_

It is highly recommended that every project include individuals who
have taken Sybase Education's Performance and Tuning course. This
5-day course provides the hands-on experience essential for success.

Client Application Issues

    * Tuning Transact-SQL Queries
    * Locking and Concurrency
    * ANSI Changes Affecting Concurrency
    * Application Deadlocking
    * Optimizing Cursors in v10
    * Special Issues for Batch Applications
    * Asynchronous Queries
    * Generating Sequential Numbers
    * Other Application Issues

Tuning Transact-SQL Queries

    * Learn the strengths and weaknesses of the optimizer.
    * One of the largest factors determining performance is TSQL!
      Test not only for efficient plans but also for semantic
      correctness.
    * The optimizer will cost every permutation of accesses for
      queries involving 4 tables or less. Joins of more than 4 tables
      are "planned" 4 tables at a time (as listed in the FROM
      clause), so not all permutations are evaluated. You can
      influence the plans for these large joins by the order of
      tables in the FROM clause.
    * Avoid the following, if possible (see the sketch after this
      list):
      + What are SARGs? This is short for search arguments. A search
        argument is essentially a constant value such as:
          o "My company name"
          o 3448
        but not:
          o 344 + 88
          o like "%what you want%"
      + Mathematical manipulation of SARGs:

            SELECT name FROM employee WHERE salary * 12 > 100000

      + Use of incompatible datatypes between a column and its
        _SARG_: Float & Int, Char & Varchar, Binary & Varbinary are
        incompatible; Int & Intn (allows nulls) are OK.
      + Use of multiple "OR" statements - especially on different
        columns in the same table. If any portion of the OR clause
        requires a table scan, it will! The OR strategy requires the
        additional cost of creating and sorting a work table.
      + Not using the leading portion of the index (unless the query
        is completely covered).
      + Substituting "OR" with "IN (value1, value2, ... valueN)" -
        the optimizer automatically converts this to an "OR".
      + Use of non-equal expressions (!=) in the WHERE clause.
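A quick sketch of the first pitfall and a SARGable rewrite (employee
and salary as in the example above; note the 100000.0 to avoid
integer division):

    /* Not SARGable: the column is buried in an expression, so an
    ** index on salary can't be used to limit the scan. */
    SELECT name FROM employee WHERE salary * 12 > 100000

    /* SARGable: the arithmetic moves to the constant side. */
    SELECT name FROM employee WHERE salary > 100000.0 / 12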
+ Use the "set statistics io on" command to see the number of logical and physical i/o's for a query. Scrutinize those queries with high logical i/o's. + Use the "set statistics time on" command to see the amount of time (elapsed, execution, parse and compile) a query takes to run. + If the optimizer turns out to be a "pessimizer", use the "set forceplan on" command to change join order to be the order of the tables in the FROM clause. + If the optimizer refuses to select the proper index for a table, you can force it by adding the index id in parentheses after the table name in the FROM clause. SELECT * FROM orders(2), order_detail(1) WHERE ... _This may cause portability issues should index id's vary/change by site ! _ Locking and Concurrency * The Optimizer Decides on Lock Type and Granularity * Decisions on lock type (share, exclusive, or update) and granularity (page or table) are made during optimization so make sure your updates and deletes don't scan the table ! * Exclusive Locks are Only Released Upon Commit or Rollback * Lock Contention can have a large impact on both throughput and response time if not considered both in the application and database design ! * Keep transactions as small and short as possible to minimize blocking. Consider alternatives to "mass" updates and deletes such as a v10.0 cursor in a stored procedure which frequently commits. * Never include any "user interaction" in the middle of transactions. * Shared Locks Generally Released After Page is Read * Share locks "roll" through result set for concurrency. Only "HOLDLOCK" or "Isolation Level 3" retain share locks until commit or rollback. Remember also that HOLDLOCK is for read-consistency. It doesn't block other readers ! * Use optimistic locking techniques such as timestamps and the tsequal() function to check for updates to a row since it was read (rather than holdlock) ANSI Changes Affecting Concurrency * Chained Transactions Risk Concurrency if Behavior not Understood * Sybase defaults each DML statement to its own transaction if not specified ; * ANSI automatically begins a transaction with any SELECT, FETCH, OPEN, INSERT, UPDATE, or DELETE statement ; * If Chained Transaction must be used, extreme care must be taken to ensure locks aren't left held by applications unaware they are within a transaction! This is especially crucial if running at Level 3 Isolation * Lock at the Level of Isolation Required by the Query * Read Consistency is NOT a requirement of every query. * Choose level 3 only when the business model requires it * Running at Level 1 but selectively applying HOLDLOCKs as needed is safest * If you must run at Level 3, use the NOHOLDLOCK clause when you can ! * Beware of (and test) ANSI-compliant third-party applications for concurrency Application Deadlocking Prior to SQL Server 10 cursors, many developers simulated cursors by using two or more connections (dbproc's) and divided the processing between them. Often, this meant one connection had a SELECT open while "positioned" UPDATEs and DELETEs were issued on the other connection. The approach inevitably leads to the following problem: 1. Connection A holds a share lock on page X (remember "Rows Pending" on SQL Server leave a share lock on the "current" page). 2. Connection B requests an exclusive lock on the same page X and waits... 3. The APPLICATION waits for connection B to succeed before invoking whatever logic will remove the share lock (perhaps dbnextrow). Of course, that never happens ... 
Since Connection A never requests a lock which Connection B holds, this is NOT a true server-side deadlock. It's really an "application" deadlock!

Design Alternatives

1. Buffer additional rows in the client that are "nonupdateable". This forces the shared lock onto a page on which the application will not request an exclusive lock.
2. Re-code these modules with CT-Library cursors (aka server-side cursors). These cursors avoid this problem by disassociating command structures from connection structures.
3. Re-code these modules with DB-Library cursors (aka client-side cursors). These cursors avoid this problem through buffering techniques and re-issuing of SELECTs. Because of the re-issuing of SELECTs, these cursors are not recommended for high transaction sites!

Optimizing Cursors with v10.0

* Always Declare a Cursor's Intent (i.e. Read Only or Updateable)
  * Allows for greater control over concurrency implications
  * If not specified, SQL Server will decide for you and usually choose updateable
* Updateable cursors use UPDATE locks, preventing other U or X locks
* Updateable cursors that include indexed columns in the update list may table scan
* SET the Number of Rows for each FETCH
  * Allows for greater network optimization over ANSI's 1-row fetch
  * Rows fetched via Open Client cursors are transparently buffered in the client:

      FETCH -> Open Client buffers N rows

* Keep the Cursor Open on a Commit / Rollback
  * ANSI closes cursors with each COMMIT, causing either poor throughput (by making the server re-materialize the result set) or poor concurrency (by holding locks)
* Open Multiple Cursors on a Single Connection
  * Reduces resource consumption on both client and server
  * Eliminates the risk of a client deadlocking with itself

Special Issues for Batch Applications

SQL Server was not designed as a batch subsystem! It was designed as an RDBMS for large multi-user applications. Designers of batch-oriented applications should consider the following design alternatives to maximize performance:

* Minimize Client/Server Interaction Whenever Possible
  * Don't turn SQL Server into a "file system" by issuing single table / single row requests when, in actuality, set logic applies.
  * Maximize TDS packet size for efficient interprocess communication (v10 only)
  * New SQL Server 10.0 cursors declared and processed entirely within stored procedures and triggers offer significant performance gains in batch processing.
* Investigate Opportunities to Parallelize Processing
  * Breaking up single processes into multiple, concurrently executing connections (where possible) will outperform single-streamed processes every time.
* Make Use of TEMPDB for Intermediate Storage of Useful Data

Asynchronous Queries

Many, if not most, applications and 3rd party tools are coded to send queries with the DB-Library call dbsqlexec(), which is a synchronous call! It sends a query and then waits for a response from SQL Server that the query has completed!

Designing your applications for asynchronous queries provides many benefits:

1. A "cooperative" multi-tasking application design under Windows will allow users to run other Windows applications while your long queries are processed!
2. Provides design opportunities to parallelize work across multiple SQL Server connections.

Implementation Choices:

* System 10 Client Library Applications:
  * True asynchronous behavior is built into the entire library. Through the appropriate use of call-backs, asynchronous behavior is the normal processing paradigm.
* Windows DB-Library Applications (not true async, but polling for data):
  * Use dbsqlsend(), dbsqlok(), and dbdataready() in conjunction with some additional code in WinMain() to pass control to a background process. Code samples which outline two different Windows programming approaches (a PeekMessage loop and a Windows Timer approach) are available in the Microsoft Software Library on Compuserve (GO MSL). Look for _SQLBKGD.ZIP_
* Non-PC DB-Library Applications (not true async, but polling for data):
  * Use dbsqlsend(), dbsqlok(), and dbpoll() to utilize non-blocking functions.

Generating Sequential Numbers

Many applications use unique, sequentially increasing numbers, often as primary keys. While there are good benefits to this approach, generating these keys can be a serious contention point if you're not careful. For a complete discussion of the alternatives, download Malcolm Colton's White Paper on Sequential Keys from the SQL Server Library of our OpenLine forum on Compuserve. The two best alternatives are outlined below.

1. "Primary Key" Table Storing the Last Key Assigned
   + Minimize contention by either using a separate "PK" table for each user table or padding out each row to a page. Make sure updates are "in-place".
   + Don't include the "PK" table's update in the same transaction as the INSERT. It will serialize the transactions.

       BEGIN TRAN
           UPDATE pk_table
              SET nextkey = nextkey + 1
           [WHERE table_name = @tbl_name]
       COMMIT TRAN

       /* Now retrieve the information */
       SELECT nextkey
         FROM pk_table
       [WHERE table_name = @tbl_name]

   + "Gap-less" sequences require additional logic to store and retrieve rejected values
2. IDENTITY Columns (v10.0 only)
   + The last key assigned for each table is stored in memory and automatically included in all INSERTs (BCP too). This should be the method of choice for performance.
   + Choose a large enough numeric or else all inserts will stop once the max is hit.
   + Potential rollbacks in long transactions may cause gaps in the sequence!

Other Application Issues

* Transaction Logging Can Bottleneck Some High Transaction Environments
  * Committing a Transaction Must Initiate a Physical Write for Recoverability
  * Implementing multiple statements as a transaction can assist in these environments by minimizing the number of log writes (the log is flushed to disk on commits).
* Utilizing the Client Machine's Processing Power Balances the Load
  * Client/Server doesn't dictate that everything be done on the Server!
  * Consider moving "presentation" related tasks such as string or mathematical manipulations, sorting, or, in some cases, even aggregating to the client.
* Populating of "Temporary" Tables Should Use "SELECT _INTO_" - balance this with dynamic creation of temporary tables in an OLTP environment. Dynamic creation may cause blocks in your tempdb.
  * "SELECT INTO" operations are not logged and thus are significantly faster than their INSERT with a nested SELECT counterparts.
* Consider Porting Applications to Client Library Over Time
  * True Asynchronous Behavior Throughout the Library
  * Array Binding for SELECTs
  * Dynamic SQL
  * Support for ClientLib-initiated callback functions
  * Support for Server-side Cursors
  * Shared Structures with Server Library (Open Server 10)

Physical Database Design Issues

* Normalized -vs- Denormalized Design
* Index Selection
* Promote "Updates-in-Place" Design
* Promote Parallel I/O Opportunities

Normalized -vs- Denormalized

* Always Start with a Completely Normalized Database
  * Denormalization should be an optimization taken as a result of a performance problem
  * Benefits of a normalized database include:
    1. Accelerates searching, sorting, and index creation since tables are narrower
    2. Allows more clustered indexes and hence more flexibility in tuning queries, since there are more tables
    3. Accelerates index searching since indexes tend to be narrower and perhaps shorter
    4. Allows better use of segments to control physical placement of tables
    5. Fewer indexes per table, helping UPDATE, INSERT, and DELETE performance
    6. Fewer NULLs and less redundant data, increasing compactness of the database
    7. Accelerates trigger execution by minimizing the extra integrity work of maintaining redundant data.
    8. Joins are Generally Very Fast Provided Proper Indexes are Available
    9. Normal caching and the cindextrips parameter (discussed in the Server section) mean each join will do on average only 1-2 physical I/Os.
    10. The cost of a logical I/O (get page from cache) is only 1-2 milliseconds.

There Are Some Good Reasons to Denormalize

1. All queries require access to the "full" set of joined data.
2. The majority of applications scan entire tables doing joins.
3. Computational complexity of derived columns requires storage for SELECTs
4. Others ...

Index Selection

* Without a clustered index, all INSERTs and "out-of-place" UPDATEs go to the last page. The lock contention in high transaction environments would be prohibitive. This is also true for INSERTs to a clustered index on a monotonically increasing key.
* High INSERT environments should always cluster on a key which provides the most "randomness" (to minimize lock / device contention) that is usable in many queries. Note this is generally not your primary key!
* Prime candidates for a clustered index (in addition to the above) include:
  + Columns Accessed by a Range
  + Columns Used with Order By, Group By, or Joins
* Indexes Help SELECTs and Hurt INSERTs
  * Too many indexes can significantly hurt performance of INSERTs and "out-of-place" UPDATEs.
  * Prime candidates for nonclustered indexes include:
    + Columns Used in Queries Requiring Index Coverage
    + Columns Used to Access Less than 20% (rule of thumb) of the Data
* Unique indexes should be defined as UNIQUE to help the optimizer
* Minimize index page splits with the fillfactor (helps concurrency and minimizes deadlocks)
* Keep the Size of the Key as Small as Possible
  * Accelerates index scans and tree traversals
  * Use small datatypes whenever possible. Numerics should also be used whenever possible as they compare faster than strings.

Promote "Update-in-Place" Design

* "Update-in-Place" is Faster by Orders of Magnitude
  * The performance gain is dependent on the number of indexes. A recent benchmark (160 byte rows, 1 clustered index and 2 nonclustered) showed an 800% difference!
  * The alternative ("Out-of-Place" Update) is implemented as a physical DELETE followed by a physical INSERT. These tactics result in:
    1. Increased Lock Contention
    2. Increased Chance of Deadlock
    3. Decreased Response Time and Throughput
* Currently (System 10 and below), the Rules for "Update-in-Place" Behavior Include:
  1. Columns updated can not be variable length or allow nulls
  2. Columns updated can not be part of an index used to locate the row to update
  3. No update trigger on the table being updated (because the inserted and deleted tables used in triggers get their data from the log)

In v4.9.x and below, only one row may be affected and the optimizer must know this in advance by choosing a UNIQUE index. System 10 eliminated this limitation.

Promote Parallel I/O Opportunities

* For I/O-bound Multi-User Systems, Use a Lot of Logical and Physical Devices
  * Plan a balanced separation of objects across logical and physical devices.
  * An increased number of physical devices (including controllers) ensures physical bandwidth
  * An increased number of logical Sybase devices ensures minimal contention for internal resources. Look at SQL Monitor's Device I/O Hit Rate for clues. Also watch out for the 128 device limit per database.
  * Create Database (in v10) starts parallel I/O on up to 6 devices concurrently. If taken advantage of, expect an 800% performance gain. A 2Gb TPC-B database that took 4.5 hours under 4.9.1 to create now takes 26 minutes if created on 6 independent devices!
* Use Sybase Segments to Ensure Control of Placement
  * This is the only way to guarantee logical separation of objects on devices to reduce contention for internal resources.
  * Dedicate a separate physical device and controller to the transaction log - in tempdb too.
* Optimize TEMPDB Also, if Heavily Accessed
  * An increased number of logical Sybase devices ensures minimal contention for internal resources.
* Systems requiring increased log throughput today must partition the database into separate databases
  * Breaking up one logical database into multiple smaller databases increases the number of transaction logs working in parallel.

Networking Issues

* Choice of Transport Stacks
* Variable Sized TDS Packets
* TCP/IP Packet Batching

Choice of Transport Stacks for PCs

* Choose a Stack that Supports "Attention Signals" (aka "Out of Band Data")
  * Provides for the most efficient mechanism to cancel queries.
  * Essential for sites providing ad-hoc query access to large databases.
* Without "Attention Signal" capabilities (or the urgent flag in the connection string), the DB-Library functions DBCANQUERY() and DBCANCEL() will cause SQL Server to send all rows back to the client DB-Library as quickly as possible so as to complete the query. This can be very expensive if the result set is large and, from the user's perspective, causes the application to appear as though it has hung.
* With "Attention Signal" capabilities, Net-Library is able to send an out-of-sequence packet requesting the SQL Server to physically throw away any remaining results, providing for instantaneous response.
* Currently, the following network vendors and associated protocols support an "Attention Signal" capable implementation:
  1. NetManage NEWT
  2. FTP TCP
  3. Named Pipes (10860) - Do not use the urgent parameter with this Netlib
  4. Novell LAN Workplace v4.1 - Patch required from Novell
  5. Novell SPX - Implemented internally through an "In-Band" packet
  6. Wollongong Pathway
  7. Microsoft TCP - Patch required from Microsoft

Variable-sized TDS Packets

* Pre-v4.6 TDS Does Not Optimize Network Performance
  * The current SQL Server TDS packet size is limited to 512 bytes while network frame sizes are significantly larger (1508 bytes on Ethernet and 4120 bytes on Token Ring). The specific protocol may have other limitations! For example:
    o IPX is limited to 576 bytes in a routed network.
    o SPX requires acknowledgement of every packet before it will send another. A recent benchmark measured a 300% performance hit over TCP in "large" data transfers (small transfers showed no difference).
* Open Client Apps can "Request" a Larger Packet
  * Shown to have significant performance improvement on "large" data transfers such as BCP, Text / Image Handling, and Large Result Sets.
  + clients:
    o isql -Usa -Annnnn
    o bcp -Usa -Annnnn
    o ct_con_props(connection, CS_SET, CS_PACKETSIZE, &packetsize, sizeof(packetsize), NULL)
  + An "SA" must Configure each Server's Defaults Properly
    o sp_configure "default packet size", nnnnn - Sets the default packet size per client connection (defaults to 512)
    o sp_configure "maximum packet size", nnnnn - Sets the maximum TDS packet size per client connection (defaults to 512)
    o sp_configure "additional netmem", nnnnn - Additional memory for large packets, taken from a separate pool. This memory does not come from the sp_configure memory setting.

        Optimal value = ((# connections using large packets) * (large packetsize * 3))
                        + an additional 1-2% of the above calculation for overhead

      Each connection using large packets has 3 network buffers: one to read, one to write, and one overflow.
      # Default network memory - Default-sized packets come from this memory pool.
      # Additional network memory - Big packets come from this memory pool. If not enough memory is available in this pool, the server will give a smaller packet size, down to the default.

TCP/IP Packet Batching

* The TCP Networking Layer Defaults to "Packet Batching"
  * This means that TCP/IP will batch small logical packets into one larger physical packet by briefly delaying packets in an effort to fill the physical network frames (Ethernet, Token-Ring) with as much data as possible.
  * Designed to improve performance in terminal emulation environments where there are mostly only keystrokes being sent across the network.
* Some Environments Benefit from Disabling Packet Batching
  * Applies mainly to socket-based networks (BSD) although we have seen some TLI networks such as NCR's benefit.
  * Applications sending very small result sets or statuses from sprocs will usually benefit. Benchmark with your own application to be sure.
  * This makes SQL Server open all connections with the TCP_NODELAY option. Packets will be sent regardless of size.
* To disable packet batching, in pre-Sys 11, start SQL Server with the 1610 Trace Flag.

      $SYBASE/dataserver -T1610 -d /usr/u/sybase/master.dat ...

  Your errorlog will indicate the use of this option with the message:

      SQL Server booted with TCP_NODELAY enabled.

Operating System Issues

* Never Let SQL Server Page Fault
  * It is better to configure SQL Server with less memory and do more physical database I/O than to page fault. OS page faults are synchronous and stop the entire dataserver engine until the page fault completes. Since database I/Os are asynchronous, other user tasks can continue!
* Use Process Affinitying in SMP Environments, if Supported
  * Affinitying dataserver engines to specific CPUs minimizes the overhead associated with moving process information (registers, etc.) between CPUs.
  * Most implementations will preference other tasks onto other CPUs as well, allowing even more CPU time for dataserver engines.
  * Watch out for OSs which are not fully symmetric. Affinitying dataserver engines onto CPUs that are heavily used by the OS can seriously degrade performance. Benchmark with your application to find the optimal binding.
* Increase the Priority of Dataserver Engines, if Supported
  * Give SQL Server the opportunity to do more work. If SQL Server has nothing to do, it will voluntarily yield the CPU.
  * Watch out for OSs which externalize their async drivers. They need to run too!
* Use OS Monitors to Verify Resource Usage
  * The OS CPU monitors only "know" that an instruction is being executed. With SQL Server's own threading and scheduling, it can routinely be 90% idle when the OS thinks it's 90% busy. SQL Monitor shows real CPU usage (see the sp_monitor sketch below).
  * Look into high disk I/O wait time or I/O queue lengths. These indicate physical saturation points in the I/O subsystem or poor data distribution.
  * Disk utilization above 50% may be subject to queuing effects, which often manifest themselves as uneven response times.
  * Look into high system call counts which may be symptomatic of problems.
  * Look into high context switch counts which may also be symptomatic of problems.
  * Optimize your kernel for SQL Server (minimal OS file buffering, adequate network buffers, appropriate KEEPALIVE values, etc.).
* Use OS Monitors and SQL Monitor to Determine Bottlenecks
  * The most likely "non-application" contention points include:

      Resource                     Where to Look
      --------                     -------------
      CPU Performance              SQL Monitor - CPU and Trends
      Physical I/O Subsystem       OS Monitoring tools - iostat, sar...
      Transaction Log              SQL Monitor - Device I/O and Device
                                   Hit Rate on Log Device
      SQL Server Network Polling   SQL Monitor - Network and Benchmark
                                   Baselines
      Memory                       SQL Monitor - Data and Cache
                                   Utilization

* Use Vendor-supported Striping such as LVM and RAID
  * These technologies provide a very simple and effective mechanism for load balancing I/O across physical devices and channels.
  * Use them provided they support asynchronous I/O and reliable writes.
  * These approaches do not eliminate the need for Sybase segments to ensure minimal contention for internal resources.
  * Non-read-only environments should expect performance degradation when using RAID levels other than level 0. These levels all include fault tolerance where each write requires additional reads to calculate a "parity" as well as the extra write of the parity data.

Hardware Configuration Issues

* Number of CPUs
  * Use information from SQL Monitor to assess SQL Server's CPU usage.
  * In SMP environments, dedicate at least one CPU for the OS.
  * The advantages and scaling of VSA are application-dependent. VSA was architected with large multi-user systems in mind.
* I/O Subsystem Configuration
  * Look into high disk I/O wait times or I/O queue lengths. These may indicate physical I/O saturation points or poor data distribution.
  * Disk utilization above 50% may be subject to queuing effects, which often manifest themselves as uneven response times.
  * Logical Volume configurations can impact the performance of operations such as create database, create index, and bcp. To optimize for these operations, create Logical Volumes such that they start on different channels / disks to ensure I/O is spread across channels.
  * Discuss device and controller throughput with hardware vendors to ensure channel throughput is high enough to drive all devices at maximum rating.
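As a quick cross-check of what the OS monitors are telling you, the server's own cumulative counters can be sampled from isql with the standard sp_monitor procedure. A minimal sketch (the exact column layout varies by release, but cpu_busy, io_busy and idle are the interesting ones here):

    1> sp_monitor
    2> go
    1> /* wait through a representative busy interval, then sample again */
    2> sp_monitor
    3> go

sp_monitor reports each counter since the server was booted and, in parentheses, the amount accumulated since sp_monitor was last run, so the second execution brackets the interval. If cpu_busy is low for the interval while the OS tools show the CPU pegged, the cycles are going somewhere other than SQL Server.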
General SQL Server Tuning

* Changing Values with sp_configure or buildmaster

  _It is imperative that you only use sp_configure to change those parameters that it currently maintains, because the process of reconfiguring actually recalculates a number of other buildmaster parameters. Using the buildmaster utility to change a parameter "managed" by sp_configure may result in a mis-configured server and cause adverse performance or even worse ..._

* Sizing Procedure Cache
  + SQL Server maintains an MRU-LRU chain of stored procedure query plans. As users execute sprocs, SQL Server looks in cache for a query plan to use. However, stored procedure query plans are currently not re-entrant! If a query plan is available, it is placed on the MRU and execution begins. If no plan is in memory, or if all copies are in use, a new copy is read from the sysprocedures table. It is then optimized and put on the MRU for execution.
  + Use dbcc memusage to evaluate the size and number of each sproc currently in cache. Use SQL Monitor's cache statistics to get your average cache hit ratio. Ideally during production, one would hope to see a high hit ratio to minimize the procedure reads from disk. Use this information in conjunction with your desired hit ratio to calculate the amount of memory needed.
* Memory
  + Tuning memory is more a price/performance issue than anything else! The more memory you have available, the greater the probability of minimizing physical I/O. This is an important goal, though. Not only does physical I/O take significantly longer, but threads doing physical I/O must go through the scheduler once the I/O completes. This means that work on behalf of the thread may not actually continue to execute for quite a while!
  + There are no longer (as of v4.8) any inherent limitations in SQL Server which cause a point of diminishing returns on memory size.
  + Calculate memory based on the following algorithm:

      Total Memory = Dataserver Executable Size (in bytes) +
                     Static Overhead of 1 Mb +
                     User Connections x 40,960 bytes +
                     Open Databases x 644 bytes +
                     Locks x 32 bytes +
                     Devices x 45,056 bytes +
                     Procedure Cache +
                     Data Cache

* Recovery Interval
  + As users change data in SQL Server, only the transaction log is written to disk right away for recoverability. "Dirty" data and index pages are kept in cache and written to disk at a later time. This provides two major benefits:
    1. Many transactions may change a page yet only one physical write is done
    2. SQL Server can schedule the physical writes "when appropriate"
  + SQL Server must eventually write these "dirty" pages to disk.
  + A checkpoint process wakes up periodically and "walks" the cache chain looking for dirty pages to write to disk
  + The recovery interval controls how often checkpoint writes dirty pages.
* Tuning the Recovery Interval
  + A low value may cause unnecessary physical I/O, lowering the throughput of the system. Automatic recovery is generally much faster during boot-up.
  + A high value minimizes unnecessary physical I/O and helps the throughput of the system. Automatic recovery may take substantial time during boot-up.

Audit Performance Tuning for v10.0

* Potentially as Write Intensive as Logging
  * Isolate Audit I/O from other components.
  * Since auditing nearly always involves sequential writes, RAID Level 0 disk striping or other byte-level striping technology should provide the best performance (theoretically).
* Size the Audit Queue Carefully
  * Audit records generated by clients are stored in an in-memory audit queue until they can be processed.
  * Tune the queue's size with sp_configure "audit queue size", nnnn (in rows).
  * Sizing this queue too small will seriously impact performance, since all user processes that generate audit activity will sleep if the queue fills up.
* Size the Audit Database Carefully
  * Each audit row could require up to 416 bytes, depending on what is audited.
  * Sizing this database too small will seriously impact performance, since all user processes that generate audit activity will sleep if the database fills up.

_________________________________________________________________

Q3.2: TEMP TABLES AND OLTP
_________________________________________________________________

Our shop would like to inform folks of a potential problem when using _temporary tables in an OLTP environment._ Using temporary tables dynamically in an OLTP production environment may result in blocking (single-threading) as the number of transactions using the temporary tables increases.

Does it affect my application?

This warning only applies to SQL that is invoked frequently in an OLTP production environment, where the use of _"select into..." or "create table #temp"_ is common. Applications using temp tables may experience blocking problems as the number of transactions increases. This warning does not apply to SQL that may be in a report or that is not used frequently. _Frequently_ is defined as several times per second.

Why? Why? Why?

Our shop was working with an application owner to chase down a problem they were having during peak periods. The problem they were having was severe blocking in tempdb. What the DBA group witnessed was that as the number of transactions increased on this particular application, the number of blocks in tempdb also increased. We ran some independent tests to simulate a heavily loaded server and discovered that the data pages in contention were in tempdb's _syscolumns_ table. This actually makes sense, because during table creation entries are added to this table regardless of whether it's a temporary or permanent table. (A sketch of how to confirm this on your own server follows the summary below.) We ran another simulation where we created the tables before the stored procedure used them, and the blocks went away. We then performed an additional test to determine what impact creating temporary tables dynamically would have on the server, and discovered that there is a 33% performance gain by creating the tables once rather than re-creating them. Your mileage may vary.

How do I fix this?

To make things better, do the 90's thing -- _reduce and reuse your temp tables._ During one application connection/session, aim to create the temp tables only once.

Let's look at the lifespan of a temp table. If temp tables are created in a batch within a connection, then all future batches and stored procs will have access to such temp tables until they're dropped; this is the reduce and reuse strategy we recommend. However, if temp tables are created in a stored proc, then the database will drop the temp tables when the stored proc ends, and this means repeated and multiple temp table creations; you want to avoid this.

Recode your stored procedures so that they assume that the temporary tables already exist, and then alter your application so that it creates the temporary tables at start-up -- once, and not every time the stored procedure is invoked.

That's it! Pretty simple, eh?

Summary

The upshot is that you can realize roughly a 33% performance gain and avoid the blocking, which is difficult to quantify due to the specificity of each application. Basically, you cannot lose.
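To check whether this is what is biting your own server, watch who is blocking whom while the application runs, and translate the contested table id back to a name. A minimal sketch using the standard sp_who, sp_lock, and object_name() (the id shown is illustrative; syscolumns is object id 3 in every database, and tempdb is always dbid 2):

    1> sp_who
    2> go
    /* note the spid shown in the "blk" column - that's the blocker */
    1> sp_lock
    2> go
    /* note the table id the blocking spid holds locks on in tempdb */
    1> use tempdb
    2> go
    1> select object_name(3)
    2> go

If the name that comes back is syscolumns, you are seeing the catalog contention described above.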
Solution in pseudo-code

If you have an application that creates the same temp table many times within one connection, here's how to convert it to reduce and reuse temp table creations. Raymond Lew has supplied a detailed example for trying this.

Old:

    open connection
    loop until time to go
        exec procedure vavoom_often
        /* vavoom_often creates and uses #gocart for every call */
        /* eg: select * into #gocart from gocart */
        go
        . . .
    loop-end
    close connection

New:

    open connection
    /* Create the temporary table outside of the sproc */
    select * into #gocart from gocart where 1 = 2
    go
    loop until time to go
        exec procedure vavoom_often
        /* vavoom_often reuses #gocart, which */
        /* was created before exec of vavoom_often */
        /* - The first statement may be a truncate table #gocart */
        /* - Execute _with recompile_ */
        /*   if your table will have more than 10 data pages, */
        /*   as the optimizer will assume 10 data pages for temp tables */
        go
        . . .
    loop-end
    close connection

Note that it is necessary to call out the code that creates the table, and this becomes a pain in the butt because the create-table statement will have to be replicated in any stored proc and in the initialization part of the application - a maintenance nuisance. This can be solved by using a macro package such as _m4_ or _cpp_, or by using and adapting the scripts from Raymond Lew below.

_________________________________________________________________

From: Raymond Lew

At our company, we try to keep the database and the application loosely coupled to allow independent changes at the frontend or the backend as long as the interface stays the same. Embedding temp table definitions in the frontend would make this more difficult.

To get away from having to embed the temp table definitions in the frontend code, we are storing the temp table definitions in the database. The frontend programs retrieve the definitions and declare the tables dynamically at the beginning of each session. This allows for the change of backend procedures without changes in the frontend when the API does not change.

Enclosed below are three scripts. The first is an isql script to create the tables to hold the definitions. The second is a shell script to set up a sample procedure named vavoom. The third is a shell script to demonstrate the structure of application code.

I would like to thank Charles Forget and Gordon Rees for their assistance on these scripts.
--start of setup------------------------------------------------------
/* Raymond Lew - 1996-02-20 */
/* This isql script will set up the following tables:
   gocart - sample table
   app_temp_defn - where temp table definitions are stored
   app_temp_defn_group - a logical grouping of temp table definitions
                         for an application function
*/

/******************************/
/* gocart table - sample table*/
/******************************/
drop table gocart
go
create table gocart
(
  cartname   char(10) null
 ,cartcolor  char(30) null
)
go
create unique clustered index gocart1 on gocart (cartname)
go
insert into gocart values ('go1','blue ')
insert into gocart values ('go2','pink ')
insert into gocart values ('go3','green ')
insert into gocart values ('go4','red ')
go

/****************************************************************/
/* app_temp_defn - definition of temp tables with their indexes */
/****************************************************************/
drop table app_temp_defn
go
create table app_temp_defn
(
  /* note: temp tables are unique only in first 13 chars */
  objectname char(20)  not null
 ,seq_no     smallint  not null
 ,defntext   char(255) not null
)
go
create unique clustered index app_temp_defn1
  on app_temp_defn (objectname, seq_no)
go
insert into app_temp_defn values ('#gocart',1,'select * into #gocart')
insert into app_temp_defn values ('#gocart',2,' from gocart where 1=2 ')
go
insert into app_temp_defn values ('#gocartindex',1,
  "create unique index gocartindex on #gocart (cartname) ")
go
insert into app_temp_defn values ('#gocart1',1,
  'select * into #gocart1 from gocart where 1=2')
go

/***********************************************************************/
/* app_temp_defn_group - groupings of temp definitions by applications */
/***********************************************************************/
drop table app_temp_defn_group
go
create table app_temp_defn_group
(
  appname    char(8)  not null
 ,objectname char(20) not null
)
go
create unique clustered index app_temp_defn_group1
  on app_temp_defn_group (appname, objectname)
go
insert into app_temp_defn_group values ('abc','#gocart')
insert into app_temp_defn_group values ('abc','#gocartindex')
go

/***********************************************************/
/* get_temp_defn - proc for getting the temp defn by group */
/***********************************************************/
drop procedure get_temp_defn
go
create procedure get_temp_defn
( @appname char(8) )
as
if @appname = ''
  select defntext
    from app_temp_defn
   order by objectname, seq_no
else
  select defntext
    from app_temp_defn a, app_temp_defn_group b
   where a.objectname = b.objectname
     and b.appname = @appname
   order by a.objectname, a.seq_no
return
go

/* let's try some tests */
exec get_temp_defn ''
go
exec get_temp_defn 'abc'
go
--end of setup --------------------------------------------------

--- start of make.vavoom --------------------------------------------
#!/bin/sh
# Raymond Lew - 1996-02-20
#
# bourne shell script for creating stored procedures using
# app_temp_defn table
#
# demo procedure vavoom created here
#
# note: you have to change the passwords, id, etc. for your site
# note: you might have to make some inline changes to make this work
#       - check out the notes within the body

# get the table defn's into a text file
#
# note: next line: you will need to end the line immediately after eot
isql -Ukryten -Pjollyguy -Sstarbug -w255 > tabletext <<eot
exec get_temp_defn ''
go
eot
# note: prev line: you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

#
# create the proc vavoom after running the temp defn's into db
#
isql -Ukryten -Pjollyguy -Sstarbug -e > tabletext <<eot
exec get_temp_defn ''
go
eot
# note: prev line: you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

isql -Ukryten -Pjollyguy -Sstarbug -e

_________________________________________________________________

Q3.7: HAVE I CONFIGURED ENOUGH MEMORY FOR MY SQL SERVER?
_________________________________________________________________

1> /* send the output to the screen instead of errorlog */
2> dbcc traceon(3604)
3> go
1> dbcc memusage
2> go

Memory Usage:               Meg.    2K Blks        Bytes

  Configured Memory:    300.0000     153600    314572800

  Code size:              2.6375       1351      2765600
  Kernel Structures:     77.6262      39745     81396975
  Server Structures:     54.4032      27855     57045920
  Page Cache:           129.5992      66355    135894640
  Proc Buffers:           1.1571        593      1213340
  Proc Headers:          25.0840      12843     26302464

  Number of page buffers: 63856
  Number of proc buffers: 15964

Buffer Cache, Top 20:

  DB Id     Object Id     Index Id    2K Buffers
  6          927446498       0            9424
  6          507969006       0            7799
  6          959446612       0            7563
  6          116351649       0            7428
  6         2135014687       5            2972
  6          607445358       0            2780
  6          507969006       2            2334
  6         2135014687       0            2047
  6          506589013       0            1766
  6         1022066847       0            1160
  6          116351649     255             987
  6          927446498       8             897
  6          927446498      10             733
  6          959446612       7             722
  6          506589013       1             687
  6          971918604       0             686
  6          116351649       6             387

Procedure Cache, Top 20:

  Database Id: 6
  Object Id: 1652357121
  Object Name: lp_cm_case_list
  Version: 1
  Uid: 1
  Type: stored procedure
  Number of trees: 0
  Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
  Number of plans: 16
  Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
  ----
  Database Id: 6
  Object Id: 1668357178
  Object Name: lp_cm_subcase_list
  Version: 1
  Uid: 1
  Type: stored procedure
  Number of trees: 0
  Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
  Number of plans: 10
  Size of plans: 0.202827 Mb, 212680.000000 bytes, 110 pages
  ----
  Database Id: 6
  Object Id: 132351706
  Object Name: csp_get_case
  Version: 1
  Uid: 1
  Type: stored procedure
  Number of trees: 0
  Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
  Number of plans: 9
  Size of plans: 0.149792 Mb, 157068.000000 bytes, 81 pages
  ----
  Database Id: 6
  Object Id: 1858261845
  Object Name: lp_get_last_caller_new
  Version: 1
  Uid: 1
  Type: stored procedure
  Number of trees: 0
  Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
  Number of plans: 2
  Size of plans: 0.054710 Mb, 57368.000000 bytes, 30 pages
  ...

1> /* redirect output back to the errorlog */
2> dbcc traceoff(3604)
3> go

Dissecting memusage output

The output may appear overwhelming but it's actually pretty easy to parse. Let's look at each section.

Memory Usage

This section provides a breakdown of the memory configured for the SQL Server.

  Memory Usage:             Meg.    2K Blks        Bytes

  Configured Memory:    300.0000     153600    314572800
  Code size:              2.6375       1351      2765600
  Kernel Structures:     77.6262      39745     81396975
  Server Structures:     54.4032      27855     57045920
  Page Cache:           129.5992      66355    135894640
  Proc Buffers:           1.1571        593      1213340
  Proc Headers:          25.0840      12843     26302464

  Number of page buffers: 63856
  Number of proc buffers: 15964

The Configured Memory does not equal the sum of the individual components. It does in the sybooks example, but in practice it doesn't always. This is not critical and it is simply being noted here. The Kernel Structures and Server Structures are of mild interest.
They can be used to cross-check that the pre-allocation is what you believe it to be.

The salient line items are Number of page buffers and Number of proc buffers. The Number of proc buffers translates directly to the number of 2K pages available for the procedure cache. The Number of page buffers is the number of 2K pages available for the buffer cache. As a side note, and not trying to muddle things, these last two pieces of information can also be obtained from the errorlog:

  ... Number of buffers in buffer cache: 63856.
  ... Number of proc buffers allocated: 15964.

In our example, we have 15,964 2K pages (~32MB) for the procedure cache and 63,856 2K pages (~126MB) for the buffer cache.

Buffer Cache

The buffer cache contains the data pages that the SQL Server will be either flushing to disk or transmitting to a user connection. If this area is too small, the SQL Server must flush 2K pages sooner than might be necessary to satisfy a user connection's request. For example, in most database applications there are small edit tables that are used frequently by the application. These tables will populate the buffer cache and normally will remain resident during the entire life of the SQL Server. This is good, because a user connection may request validation and the SQL Server will find the data page(s) resident in memory. If, however, there is insufficient memory configured, then these small tables will be flushed out of the buffer cache in order to satisfy another query. The next time a validation is requested, the tables will have to be re-read from disk in order to satisfy the request. Your performance will degrade. Memory access is easily an order of magnitude faster than performing a physical I/O.

In this example we know from the previous section that we have 63,856 2K pages (or buffers) available in the buffer cache. The question to answer is, "do we have sufficient buffer cache configured?" The following is the output of the dbcc memusage regarding the buffer cache:

  Buffer Cache, Top 20:

  DB Id     Object Id     Index Id    2K Buffers
  6          927446498       0            9424
  6          507969006       0            7799
  6          959446612       0            7563
  6          116351649       0            7428
  6         2135014687       5            2972
  6          607445358       0            2780
  6          507969006       2            2334
  6         2135014687       0            2047
  6          506589013       0            1766
  6         1022066847       0            1160
  6          116351649     255             987
  6          927446498       8             897
  6          927446498      10             733
  6          959446612       7             722
  6          506589013       1             687
  6          971918604       0             686
  6          116351649       6             387

  Index Legend:

  Value     Definition
  0         Table data
  1         Clustered index
  2-250     Nonclustered indexes
  255       Text pages

* To translate the DB Id, use select db_name(#) to map back to the database name.
* To translate the Object Id, use the respective database and use the select object_name(#) command. (A short isql sketch follows at the end of this section.)

It's obvious that the first 10 items take up the largest portion of the buffer cache. Sum these values and compare the result to the amount of buffer cache configured. Summing the 10 items nets a result of 45,273 2K data pages. Comparing that to the number of pages configured, 63,856, we see that this SQL Server has sufficient memory configured.

When do I need more Buffer Cache?

I use the following rules of thumb to determine when more buffer cache is needed:

* If the sum of all the entries reported is equal to the number of pages configured and all entries are relatively the same size, crank it up.
* Note the natural groupings that occur in the example. If the difference between any of the groups is greater than an order of magnitude, I'd be suspicious - but only if the sum of the larger groups is very close to the number of pages configured.
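For example, to put names on the largest buffer cache entry above (DB Id 6, Object Id 927446498), something like the following works from isql. A minimal sketch using the standard db_name() and object_name() functions; note that object_name() must be run from within the database the id belongs to, and "mydb" here is hypothetical (it stands for whatever db_name(6) returns on your server):

    1> select db_name(6)
    2> go
    1> use mydb
    2> go
    1> select object_name(927446498)
    2> go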
Procedure Cache

If the procedure cache is not of sufficient size, you may get sporadic 701 errors: There is insufficient system memory to run this query. In order to calculate the correct procedure cache, one needs to apply the following formula (found in the SQL Server Troubleshooting Guide - Chapter 2, Procedure Cache Sizing):

  proc cache size = max(# of concurrent users) * (size of the largest plan) * 1.25

The flaw with the above formula is that if only 10% of the users are executing the largest plan, then you'll overshoot. If you have distinct classes of connections whose largest plans are mutually exclusive, then you need to account for that:

  ttl proc cache = proc cache size * x% + proc cache size * y% ...

The max(# of concurrent users) is not the number of user connections configured, but rather the actual number of connections during the peak period.

To compute the size of the largest [query] plan, take the results from the dbcc memusage's Procedure Cache section and apply the following formula:

  query plan size = [size of plans in bytes] / [number of plans]

We can compute the size of the query plan for lp_cm_case_list by using the output of the dbcc memusage:

  ...
  Database Id: 6
  Object Id: 1652357121
  Object Name: lp_cm_case_list
  Version: 1
  Uid: 1
  Type: stored procedure
  Number of trees: 0
  Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
  Number of plans: 16
  Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
  ----
  ...

Entering the respective numbers, the query plan size for lp_cm_case_list is 21K:

  query plan size = 339072 / 16
  query plan size = 21192 bytes or 21K

The formula would be applied to all objects found in the procedure cache and the largest value would be plugged into the procedure cache size formula:

  Query Plan Sizes

  Query Object              Plan Size
  lp_cm_case_list           21K
  lp_cm_subcase_list        21K
  csp_get_case              19K
  lp_get_last_caller_new    28K

The size of the largest [query] plan is 28K. Entering these values into the formula:

  proc cache size = max(# of concurrent users) * (size of the largest plan) * 1.25
  proc cache size = 491 connections * 28K * 1.25
  proc cache size = 17,185 2K pages required

Our example SQL Server has 15,964 2K pages configured but 17,185 2K pages are required. This SQL Server can benefit by having more procedure cache configured. This can be done one of two ways:

1. If you have some headroom in your buffer cache, then sp_configure "procedure cache" to increase the ratio of procedure cache to buffer cache:

     procedure cache = [ proposed procedure cache ] /
                       ( [ current procedure cache ] + [ current buffer cache ] )

   The new procedure cache would be 22%:

     procedure cache = 17,185 / ( 15,964 + 63,856 )
     procedure cache = .2153 or 22%

2. If the buffer cache cannot be shrunk, then sp_configure "memory" to increase the total memory:

     mem size = [ proposed procedure cache ] /
                ( [ current procedure cache ] / [ current configured memory ] )

   The new memory size would be 165,348 2K pages, assuming that the procedure cache percentage is unchanged:

     mem size = 17,185 / ( 15,964 / 153,600 )
     mem size = 165,348 2K pages

----------------------------------------------------------------------------

Q3.8: WHY SHOULD I USE _STORED PROCEDURES_?
_________________________________________________________________

There are many advantages to using stored procedures (unfortunately they do not handle the _text/image_ types):

* Security - you can revoke access to the base tables and only allow users to access and manipulate the data via the stored procedures.
* Performance - stored procedures are parsed and a query plan is compiled. This information is stored in the system tables and it only has to be done once.
* Network - if you have users who are on a WAN (slow connection), having stored procedures will improve throughput because fewer bytes need to flow down the wire from the client to the SQL Server.
* Tuning - if you have all your SQL code housed in the database, then it's easy to tune the stored procedure without affecting the clients (unless of course the parameters change).
* Modularity - during application development, the application designer can concentrate on the front-end and the DB designer can concentrate on the SQL Server.

_________________________________________________________________

Q3.9: YOU AND _SHOWPLAN_ OUTPUT
_________________________________________________________________

Microsoft SQL Server includes a very intelligent cost-based query optimizer which, given an ad-hoc query, can quickly determine the best access method for retrieving the data, including the order in which to join tables and whether or not to use indexes that may be on those tables. By using a cost-based query optimizer, the System Administrator or end user is released from having to determine the most efficient way of structuring the query to get optimal performance -- instead, the optimizer looks at all possible join orders, and the cost of using each index, and picks the plan with the least cost in terms of page I/Os.

Detailed information on the final access method that the optimizer chooses can be displayed for the user by executing the Transact-SQL "SET SHOWPLAN ON" command. This command will show each step that the optimizer uses in joining tables and which, if any, indexes it chooses to be the least-cost method of accessing the data. This can be extremely beneficial when analyzing certain queries to determine if the indexes that have been defined on a table are actually being considered by the optimizer as useful in getting to the data.

This document will define and explain each of the output messages from SHOWPLAN, and give example queries and the output from SHOWPLAN to illustrate the point. The format will be consistent throughout: a heading which corresponds to the exact text of a SHOWPLAN statement, followed by a description of what it means, a sample query which generates that particular message, and the full output from executing the query with the SHOWPLAN option on. Wherever possible, the queries will use the existing tables and indexes, unaltered, from the SQL Server "Pubs" sample database.

STEP n

This statement will be included in the SHOWPLAN output for every query, where n is an integer, beginning with "STEP 1". For some queries, SQL Server cannot effectively retrieve the results in a single step, and must break the query plan into several steps. For example, if a query includes a GROUP BY clause, the query will need to be broken into at least two steps: one step to select the qualifying rows from the table, and another step to group them. The following query demonstrates a single-step query.

Query:

  SELECT au_lname, au_fname
  FROM Authors
  WHERE city = "Oakland"

SHOWPLAN:

  STEP 1
  The type of query is SELECT
  FROM TABLE
  authors
  Nested iteration
  Table Scan

The type of query is SELECT (into a worktable)

This SHOWPLAN statement indicates that SQL Server needs to insert some of the query results into an intermediate worktable, and later in the query processing will then select the values out of that table.
This is most often seen with a query which involves a GROUP BY clause, as the results are first put into a worktable, and then the qualifying rows in the worktable are grouped based on the given column in the GROUP BY clause. The following query returns a list of all cities and indicates the number of authors that live in each city. The query plan is composed of two steps: the first step selects the rows into a worktable, and the second step retrieves the grouped rows from the worktable:

Query:

  SELECT city, total_authors = count(*)
  FROM Authors
  GROUP BY city

SHOWPLAN:

  STEP 1
  The type of query is SELECT (into a worktable)
  GROUP BY
  Vector Aggregate
  FROM TABLE
  authors
  Nested iteration
  Table Scan
  TO TABLE
  Worktable

  STEP 2
  The type of query is SELECT
  FROM TABLE
  Worktable
  Nested iteration
  Table Scan

The type of query is <query type>

This statement describes the type of query for each step. For most user queries, the value for <query type> will be SELECT, INSERT, UPDATE, or DELETE. If SHOWPLAN is turned on while other commands are issued, the <query type> will reflect the command that was issued. The following examples show various outputs for different queries/commands:

Query 1:

  CREATE TABLE Mytab (col1 int)

SHOWPLAN 1:

  STEP 1
  The type of query is TABCREATE

Query 2:

  INSERT Publishers
  VALUES ("9904", "NewPubs", "Seattle", "WA")

SHOWPLAN 2:

  STEP 1
  The type of query is INSERT
  The update mode is direct
  Table Scan
  TO TABLE
  publishers

The update mode is deferred

There are two methods or "modes" that SQL Server can use to perform update operations such as INSERT, DELETE, UPDATE, and SELECT INTO. These methods are called deferred update and direct update. When the deferred method is used, the changes are applied to all rows of the table by making log records in the transaction log to reflect the old and new value of the column(s) being modified (in the case of UPDATE operations), or the values which will be inserted or deleted (in the case of INSERT and DELETE, respectively). When all of the log records have been constructed, the changes are then applied to the data pages. This method generates more log records than a direct update (discussed later), but it has the advantage of allowing the execution of commands which may cascade changes throughout a table. For example, consider a table which has a column "col1" with a unique index on it, and data values numbered consecutively from 1 to 100 in that column. Assume an UPDATE statement is executed to increase the value in each row by 1:

Query 1:

  UPDATE Mytable
  SET col1 = col1 + 1

SHOWPLAN 1:

  STEP 1
  The type of query is UPDATE
  The update mode is deferred
  FROM TABLE
  Mytable
  Nested iteration
  Table Scan
  TO TABLE
  Mytable

Consider the consequences of starting at the first row in the table, and updating each row through the end of the table. Updating the first row (which has an initial value of 1) to 2 would cause an error, as the unique index would be violated since there is already a value of 2 in the table; likewise, updating the second row (which has an initial value of 2) to 3 would also cause a unique key violation, as would all rows through the end of the table, except for the last row. By using deferred updates, this problem is easily avoided. The log records are first constructed to show what the new values for each row will be, the existing rows are deleted, and the new values inserted. Just as with UPDATE commands, INSERT commands may also be deferred for very similar reasons.
Consider the following query (there is no clustered index or unique index on the "roysched" table):

Query 2:

  INSERT roysched
  SELECT * FROM roysched

SHOWPLAN 2:

  STEP 1
  The type of query is INSERT
  The update mode is deferred
  FROM TABLE
  roysched
  Nested iteration
  Table Scan
  TO TABLE
  roysched

Since there is no clustered index on the table, the new rows will be added to the end of the table. The query processor needs to be able to differentiate between the existing rows that are currently in the table (prior to the INSERT command) and the rows which will be inserted, so as to not get into a continuous loop of selecting a row, inserting it at the end of the table, selecting that row that it just inserted, and re-inserting it again. By using the deferred method of inserting, the log records can first be constructed to show all of the currently-existing values in the table; then SQL Server will re-read those log records to insert the rows into the table.

The update mode is direct

Whenever possible, SQL Server will attempt to use the direct method of applying updates to tables, since it is faster and requires fewer log records to be generated than the deferred method. Depending on the type of command, one or more criteria must be met in order for SQL Server to perform the update using the direct method. Those criteria are:

* INSERT: For the direct update method to be used for INSERT operations, the table into which the rows are being inserted cannot be a table which is being read from in the same command. The second query example in the previous section demonstrates this, where the rows are being inserted into the same table from which they are being selected. In addition, if rows are being inserted into the target table, and one or more of the target table's columns appear in the WHERE clause of the query, then the deferred method, rather than the direct method, will be used.
* SELECT INTO: When a table is being populated with data by means of a SELECT INTO command, the direct method will always be used to insert the new rows.
* DELETE: For the direct update method to be used for DELETE operations, the query optimizer must be able to determine that either 0 or 1 rows qualify for the delete. The only means for it to verify this is to check that there is a unique index on the table, which is qualified in the WHERE clause of the DELETE command, and that the target table is not joined with any other table(s).
* UPDATE: For the direct update method to be used for UPDATE operations, the same criteria apply as for DELETE: a unique index must exist such that the query optimizer can determine that no more than 1 row qualifies for the update, and the only table in the UPDATE command is the target table to update. In addition, all columns that are being updated must be of fixed-length datatypes, rather than variable-length. Note that any column that allows NULLs is internally stored by SQL Server as a variable-length datatype column.
Query 1:

  DELETE FROM authors
  WHERE au_id = "172-32-1176"

SHOWPLAN 1:

  STEP 1
  The type of query is DELETE
  The update mode is direct
  FROM TABLE
  authors
  Nested iteration
  Using Clustered Index
  TO TABLE
  authors

Query 2:

  UPDATE titles
  SET type = "popular_comp"
  WHERE title_id = "BU2075"

SHOWPLAN 2:

  STEP 1
  The type of query is UPDATE
  The update mode is direct
  FROM TABLE
  titles
  Nested iteration
  Using Clustered Index
  TO TABLE
  titles

Query 3:

  UPDATE titles
  SET price = $5.99
  WHERE title_id = "BU2075"

SHOWPLAN 3:

  STEP 1
  The type of query is UPDATE
  The update mode is deferred
  FROM TABLE
  titles
  Nested iteration
  Using Clustered Index
  TO TABLE
  titles

Note that the only difference between the second and third example queries is the column of the table which is being updated. In the second query, the direct update method is used, whereas in the third query, the deferred method is used. This difference is due to the datatype of the column being updated: the titles.type column is defined as "char(12) NOT NULL", while the titles.price column is defined as "money NULL". Since the titles.price column is not a fixed-length datatype, the direct method cannot be used.

GROUP BY

This statement appears in the SHOWPLAN output for any query that contains a GROUP BY clause. Queries that contain a GROUP BY clause will always be at least two-step queries: one step to select the qualifying rows into a worktable and group them, and another step to return the rows from the worktable. The following example illustrates this:

Query:

  SELECT type, AVG(advance), SUM(ytd_sales)
  FROM titles
  GROUP BY type

SHOWPLAN:

  STEP 1
  The type of query is SELECT (into a worktable)
  GROUP BY
  Vector Aggregate
  FROM TABLE
  titles
  Nested iteration
  Table Scan
  TO TABLE
  Worktable

  STEP 2
  The type of query is SELECT
  FROM TABLE
  Worktable
  Nested iteration
  Table Scan

Scalar Aggregate

Transact-SQL includes the aggregate functions:

* AVG()
* COUNT()
* COUNT(*)
* MAX()
* MIN()
* SUM()

Whenever an aggregate function is used in a SELECT statement that does not include a GROUP BY clause, it produces a single value, regardless of whether it is operating on all of the rows in a table or on a subset of the rows defined by a WHERE clause. When an aggregate function produces a single value, the function is called a "scalar aggregate", and is listed as such by SHOWPLAN. The following example shows the use of scalar aggregate functions:

Query:

  SELECT AVG(advance), SUM(ytd_sales)
  FROM titles
  WHERE type = "business"

SHOWPLAN:

  STEP 1
  The type of query is SELECT
  Scalar Aggregate
  FROM TABLE
  titles
  Nested iteration
  Table Scan

  STEP 2
  The type of query is SELECT
  Table Scan

Notice that SHOWPLAN considers this a two-step query, which is very similar to the SHOWPLAN from the GROUP BY query listed earlier. Since the query contains a scalar aggregate, which will return a single value, SQL Server internally keeps a "variable" to store the result of the aggregate function. It can be thought of as a temporary storage space to keep a running total of the aggregate function as the qualifying rows from the table are evaluated. After all rows have been evaluated from the table (Step 1), the final value from the "variable" is then selected (Step 2) to return the scalar aggregate result.

Vector Aggregate

When a GROUP BY clause is used in a query which also includes an aggregate function, the aggregate function produces a value for each group. These values are called "vector aggregates". The "Vector Aggregate" statement from SHOWPLAN indicates that the query includes a vector aggregate.
Below is an example query and SHOWPLAN which includes a vector aggregate:

Query:

    SELECT title_id, AVG(qty)
    FROM sales
    GROUP BY title_id

SHOWPLAN:

    STEP 1
    The type of query is SELECT (into a worktable)
    GROUP BY
    Vector Aggregate
    FROM TABLE
        sales
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    FROM TABLE
        Worktable
    Nested iteration
    Table Scan

FROM TABLE

This SHOWPLAN step indicates the table that the query is reading from. In most queries, the "FROM TABLE" will be followed on the next line by the name of the table which is being selected from. In other cases, it may indicate that it is selecting from a worktable (discussed later). The main importance of examining the table names after the "FROM TABLE" output is to determine the order in which the query optimizer is joining the tables. The order of the tables listed after the "FROM TABLE" statements in the SHOWPLAN output indicates the order in which the tables were joined; this order may be (and oftentimes is) different than the order in which they are listed in the FROM clause of the query, or the order in which they appear in the WHERE clause of the query. This is because the query optimizer examines all of the different join orders for the tables involved, and picks the join order that will require the fewest I/Os.

Query:

    SELECT authors.au_id, au_fname, au_lname
    FROM authors, titleauthor, titles
    WHERE authors.au_id = titleauthor.au_id
    AND titleauthor.title_id = titles.title_id
    AND titles.type = "psychology"

SHOWPLAN:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        titles
    Nested iteration
    Table Scan
    FROM TABLE
        titleauthor
    Nested iteration
    Table Scan
    FROM TABLE
        authors
    Nested iteration
    Table Scan

This query illustrates the order in which the SQL Server query optimizer chooses to join the tables, which is not the order that they were listed in the FROM clause or the WHERE clause. By examining the order of the "FROM TABLE" statements, it can be seen that the qualifying rows from the titles table are located first (using the search clause titles.type = "psychology"). Those rows are then joined with the titleauthor table (using the join clause titleauthor.title_id = titles.title_id), and finally the titleauthor table is joined with the authors table to retrieve the desired columns (using the join clause authors.au_id = titleauthor.au_id).

TO TABLE

When a command is issued which makes, or attempts to make, a modification to one or more rows of a table, such as INSERT, DELETE, UPDATE, or SELECT INTO, the "TO TABLE" statement will show the target table which is being modified. For some operations which require an intermediate step which inserts rows into a worktable (discussed later), the "TO TABLE" will indicate that the results are going to the "Worktable" table, rather than a user table. The following examples illustrate the use of the "TO TABLE" statement:

Query 1:

    INSERT sales
    VALUES ("8042", "QA973", "7/15/92", 7, "Net 30", "PC1035")

SHOWPLAN 1:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Table Scan
    TO TABLE
        sales

Query 2:

    UPDATE publishers
    SET city = "Los Angeles"
    WHERE pub_id = "1389"

SHOWPLAN 2:

    STEP 1
    The type of query is UPDATE
    The update mode is deferred
    FROM TABLE
        publishers
    Nested iteration
    Using Clustered Index
    TO TABLE
        publishers

Notice that the SHOWPLAN for the second query indicates that the publishers table is used both as the "FROM TABLE" as well as the "TO TABLE".
In the case of UPDATE operations, the optimizer needs to read the table which contains the row(s) to be updated, resulting in the "FROM TABLE" statement, and then needs to modify the row(s), resulting in the "TO TABLE" statement.

Worktable

For some types of queries, such as those that require the results to be ordered or displayed in groups, the SQL Server query optimizer may determine that it is necessary to create its own temporary worktable. The worktable is used to hold the intermediate results of the query, at which time the result rows can be ordered or grouped, and the final results then selected from that worktable. When all results have been returned, the worktable is automatically dropped. Worktables are always created in the tempdb database, so it is possible that the system administrator may have to increase the size of tempdb to accommodate queries which require very large worktables. Since the query optimizer creates these worktables for its own internal use, the names of the worktables will not be listed in the tempdb..sysobjects table.

Worktables will always need to be used when a query contains a GROUP BY clause. For queries involving ORDER BY, it is possible that the ordering can be done without the use of a worktable. If there is a clustered index on the column(s) in the ORDER BY clause, the optimizer knows that the rows are already stored in sorted order, so a sort in a worktable is not necessary (although there are exceptions to this, depending on the sort order which is installed on the server). Although the data itself is not stored in sorted order for a nonclustered index, the index's leaf level is, so the worktable will likewise not be necessary if the cheapest access plan is through the nonclustered index. However, if the optimizer determines that scanning the entire table will require fewer I/Os than using the nonclustered index, then a worktable will need to be created for the ordering of the results. The following examples illustrate the use of worktables:

Query 1:

    SELECT type, AVG(advance), SUM(ytd_sales)
    FROM titles
    GROUP BY type

SHOWPLAN 1:

    STEP 1
    The type of query is SELECT (into a worktable)
    GROUP BY
    Vector Aggregate
    FROM TABLE
        titles
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    FROM TABLE
        Worktable
    Nested iteration
    Table Scan

Query 2:

    SELECT * FROM authors
    ORDER BY au_lname, au_fname

SHOWPLAN 2:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for ORDER BY
    FROM TABLE
        authors
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE
        Worktable
    Using GETSORTED
    Table Scan

Query 3:

    SELECT * FROM authors
    ORDER BY au_id

SHOWPLAN 3:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        authors
    Nested iteration
    Table Scan

In the third example above, notice that no worktable was created for the ORDER BY clause. This is because there is a unique clustered index on the authors.au_id column, so the data is already stored in sorted order based on the au_id value, and an additional sort for the ORDER BY is not necessary. In the second example, there is a composite nonclustered index on the columns au_lname and au_fname. However, since the optimizer chose not to use the index, and due to the sort order on the SQL Server, a worktable needed to be created to accommodate the sort.
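Whether the optimizer can avoid the sort therefore depends on which indexes actually exist, so it is worth confirming them before puzzling over a plan. A minimal sketch using the standard sp_helpindex procedure against the table used above:

    sp_helpindex authors
    go
    /* expect to see the unique clustered index on au_id and the
    ** composite nonclustered index on (au_lname, au_fname) */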
Worktable created for SELECT_INTO

SQL Server's SELECT INTO operation performs two functions: it first creates a table with the exact same structure as the table being selected from, and then it inserts all rows from that table which meet the WHERE conditions (if a WHERE clause is used). The "Worktable created for SELECT_INTO" statement is slightly misleading, in that the "worktable" that it refers to is actually the new physical table that is created. Unlike other worktables, it is not dropped when the query finishes executing. In addition, the worktable is not created in tempdb, unless the user specifies tempdb as the target database for the new table.

Query:

    SELECT * INTO seattle_stores
    FROM stores
    WHERE city = "seattle"

SHOWPLAN:

    STEP 1
    The type of query is TABCREATE
    STEP 2
    The type of query is INSERT
    The update mode is direct
    Worktable created for SELECT_INTO
    FROM TABLE
        stores
    Nested iteration
    Table Scan
    TO TABLE
        Worktable

Worktable created for DISTINCT

When a query is issued which includes the DISTINCT keyword, all duplicate rows are excluded from the results so that only unique rows are returned. To accomplish this, SQL Server first creates a worktable to store all of the results of the query, including duplicates, just as though the DISTINCT keyword was not included. It then sorts the rows in the worktable, and is able to easily discard the duplicate rows. Finally, the rows from the worktable are returned, which ensures that no duplicate rows will appear in the output.

Query:

    SELECT DISTINCT city
    FROM authors

SHOWPLAN:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for DISTINCT
    FROM TABLE
        authors
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE
        Worktable
    Using GETSORTED
    Table Scan

Worktable created for ORDER BY

As discussed previously, queries which include an ORDER BY clause will often require the use of a temporary worktable. When the optimizer cannot use an available index for the ordering, it creates a worktable for use in sorting the result rows prior to returning them. Below is an example which shows the worktable being created for the ORDER BY clause:

Query:

    SELECT * FROM authors
    ORDER BY city

SHOWPLAN:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for ORDER BY
    FROM TABLE
        authors
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE
        Worktable
    Using GETSORTED
    Table Scan

Worktable created for REFORMATTING

When joining tables, SQL Server may in some cases choose to use a "reformatting strategy" to join the tables and return the qualifying rows. This strategy is only considered as a last resort, when the tables are large and neither table in the join has a useful index to use. The reformatting strategy inserts the rows from the smaller of the two tables into a worktable. Then, a clustered index is created on the worktable, and the clustered index is used in the join to retrieve the qualifying rows from each table. The main cost in using the reformatting strategy is the time and I/Os necessary to build the clustered index on the worktable; however, that cost is still cheaper than joining the tables with no index. If user queries are using the reformatting strategy, it is generally a good idea to examine the tables involved and create indexes on the columns of the tables which are being joined.
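In practice, an index on the join column of one of the tables is usually enough to make the optimizer abandon reformatting. As a hedged sketch, using the Tab1/Tab2 tables from the example that follows (the index name is invented here):

    /* give the join a usable index so the optimizer need not
    ** build its own clustered index on a worktable */
    create nonclustered index Tab2_col1_idx
    on Tab2 (col1)
    go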
The following example illustrates the reformatting strategy. Since none of the tables in the pubs database are large enough for the optimizer to consider using this strategy, two new tables are used. Each table has 5 columns defined as "char(200)". Tab1 has 500 rows and Tab2 has 250 rows.

Query:

    SELECT Tab1.col1
    FROM Tab1, Tab2
    WHERE Tab1.col1 = Tab2.col1

SHOWPLAN:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for REFORMATTING
    FROM TABLE
        Tab2
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    FROM TABLE
        Tab1
    Nested iteration
    Table Scan
    FROM TABLE
        Worktable
    Nested iteration
    Using Clustered Index

This step involves sorting

This SHOWPLAN statement indicates that the query must sort the intermediate results before returning them to the user. Queries that specify DISTINCT will require an intermediate sort, as will queries that have an ORDER BY clause which cannot use an available index. As stated earlier, the results are put into a worktable, and the worktable is then sorted. The following example demonstrates a query which requires a sort:

Query:

    SELECT DISTINCT state
    FROM stores

SHOWPLAN:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for DISTINCT
    FROM TABLE
        stores
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE
        Worktable
    Using GETSORTED
    Table Scan

Using GETSORTED

This statement indicates one of the ways in which the result rows can be returned from a table. In the case of "Using GETSORTED", the rows will be returned in sorted order. However, not all queries which return rows in sorted order will have this step. In the case of a query which has an ORDER BY clause, and an index with the proper sort sequence exists on those columns being ordered, an intermediate sort may not be necessary, and the rows can simply be returned in order by using the available index. The "Using GETSORTED" method is used when SQL Server must first create a temporary worktable to sort the result rows, and then return them in the proper sorted order. The following example shows a query which requires a worktable to be created and the rows returned in sorted order:

Query:

    SELECT au_id, au_lname, au_fname, city
    FROM authors
    ORDER BY city

SHOWPLAN:

    STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for ORDER BY
    FROM TABLE
        authors
    Nested iteration
    Table Scan
    TO TABLE
        Worktable
    STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE
        Worktable
    Using GETSORTED
    Table Scan

Nested iteration

The "Nested iteration" is the default technique used to join tables and/or return rows from a table. It simply indicates that the optimizer is using one or more sets of loops to go through a table and retrieve a row, qualify the row based on the search criteria given in the WHERE clause, return the row to the front-end, and loop again to get the next row. The method by which it gets the rows (such as using an available index) is discussed later.
The following example shows the optimizer doing nested iterations through each of the tables in the join:

Query:

    SELECT title_id, title
    FROM titles, publishers
    WHERE titles.pub_id = publishers.pub_id
    AND publishers.pub_id = '1389'

SHOWPLAN:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        publishers
    Nested iteration
    Using Clustered Index
    FROM TABLE
        titles
    Nested iteration
    Table Scan

EXISTS TABLE : nested iteration

This SHOWPLAN step is very similar to the previous one of "Nested iteration". The difference, however, is that this step indicates a nested iteration on a table which is part of an existence test in a query. There are several ways an existence test can be written in Transact-SQL, such as "EXISTS", "IN", or "=ANY". Prior to SQL Server version 4.2, queries which contained an IN clause followed by a subquery were treated as table joins. Beginning with version 4.2, these queries are now treated the same as if they were written with an EXISTS clause. The following examples demonstrate the SHOWPLAN output with queries which test for existence of values:

Query 1:

    SELECT au_lname, au_fname
    FROM authors
    WHERE EXISTS
        (SELECT *
         FROM publishers
         WHERE authors.city = publishers.city)

SHOWPLAN 1:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        authors
    Nested iteration
    Table Scan
    FROM TABLE
        publishers
    EXISTS TABLE : nested iteration
    Table Scan

Query 2:

    SELECT title
    FROM titles
    WHERE pub_id IN
        (SELECT pub_id
         FROM publishers
         WHERE city LIKE "B%")

SHOWPLAN 2:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        titles
    Nested iteration
    Table Scan
    FROM TABLE
        publishers
    EXISTS TABLE : nested iteration
    Table Scan

Table Scan

This SHOWPLAN statement indicates which method was used to retrieve the physical result rows from the given table. When the "table scan" method is used, the execution begins with the first row in the table; each row is then retrieved and compared with the conditions in the WHERE clause, and returned to the front-end if it meets the given criteria. Regardless of how many rows qualify, every row in the table must be looked at, so for very large tables, a table scan can be very costly in terms of page I/Os. If a table has one or more indexes on it, the query optimizer may still choose to do a table scan instead of using one of the available indexes, if the optimizer determines that the indexes are too costly or are not useful for the given query. The following query shows a typical table scan:

Query:

    SELECT au_lname, au_fname
    FROM authors

SHOWPLAN:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        authors
    Nested iteration
    Table Scan

Using Clustered Index

This SHOWPLAN statement indicates that the query optimizer chose to use the clustered index on a table to retrieve the rows. Unlike a table scan, using an index to retrieve rows does not require the optimizer to examine every row in the table (unless the WHERE clause applies to all rows). For queries which return a small percentage of the rows from a large table, the savings in terms of I/Os of using an index versus doing a table scan can be very significant. The following query shows the clustered index being used to retrieve the rows from the table:

Query:

    SELECT title_id, title
    FROM titles
    WHERE title_id LIKE "PS2%"

SHOWPLAN:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        titles
    Nested iteration
    Using Clustered Index

Index :

Like the previous statement with the clustered index, this statement indicates that the optimizer chose to use an index to retrieve the rows instead of doing a table scan.
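In this case the "Index :" message is followed by the name of the (nonclustered) index that was chosen. As an illustration only (this example is not from the original text, and the exact plan should be treated as an assumption, though aunmind is the nonclustered index that ships on the pubs2 authors table), a query that can be satisfied through a nonclustered index might produce:

Query:

    SELECT au_id, au_lname, au_fname
    FROM authors
    WHERE au_lname = "Ringer"

SHOWPLAN:

    STEP 1
    The type of query is SELECT
    FROM TABLE
        authors
    Nested iteration
    Index : aunmind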
Q4.1: SP_FREEDEVICE
_________________________________________________________________

use master
go
drop proc sp_freedevice
go
create proc sp_freedevice @devname char(30) = null
as
begin
    declare @showdev bit
    declare @alloc int

    if @devname = null
        select @devname = "%", @showdev = 0
    else
        select @showdev = 1

    select @alloc = low
    from master.dbo.spt_values
    where type = "E" and number = 1

    create table #freedev (name char(30), size float, used float)

    insert #freedev
    select dev.name,
           ((dev.high - dev.low) * @alloc + 500000) / 1048576,
           sum((usg.size * @alloc + 500000) / 1048576)
    from master.dbo.sysdevices dev, master.dbo.sysusages usg
    where dev.low <= usg.size + usg.vstart - 1
      and dev.high >= usg.size + usg.vstart - 1
      and dev.cntrltype = 0
    group by dev.name

    insert #freedev
    select name,
           ((high - low) * @alloc + 500000) / 1048576,
           0
    from master.dbo.sysdevices
    where cntrltype = 0
      and not exists (select *
                      from #freedev
                      where name = master.dbo.sysdevices.name)

    if @showdev = 1
    begin
        select devname = dev.name,
               size = convert(varchar(10), f.size) + " MB",
               used = convert(varchar(10), f.used) + " MB",
               free = convert(varchar(10), f.size - f.used) + " MB"
        from master.dbo.sysdevices dev, #freedev f
        where dev.name = f.name
          and dev.name like @devname

        select dbase = db.name,
               size = convert(varchar(10), ((usg.size * @alloc) + 500000) / 1048576) + " MB",
               usage = vl.name
        from master.dbo.sysdatabases db, master.dbo.sysusages usg,
             master.dbo.sysdevices dev, master.dbo.spt_values vl
        where db.dbid = usg.dbid
          and usg.segmap = vl.number
          and dev.low <= usg.size + usg.vstart - 1
          and dev.high >= usg.size + usg.vstart - 1
          and dev.status & 2 = 2
          and vl.type = "S"
          and dev.name = @devname
    end
    else
    begin
        select total = convert(varchar(10), sum(size)) + " MB",
               used = convert(varchar(10), sum(used)) + " MB",
               free = convert(varchar(10), sum(size) - sum(used)) + " MB"
        from #freedev

        select devname = dev.name,
               size = convert(varchar(10), f.size) + " MB",
               used = convert(varchar(10), f.used) + " MB",
               free = convert(varchar(10), f.size - f.used) + " MB"
        from master.dbo.sysdevices dev, #freedev f
        where dev.name = f.name
    end
end
go
grant execute on sp_freedevice to public
go
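Once installed, the procedure can be run with or without a device name; a couple of hedged invocation examples (the device name is only an illustration):

    exec sp_freedevice
    go
    /* server-wide totals plus a per-device size/used/free report */

    exec sp_freedevice "master"
    go
    /* one device, plus the databases placed on it */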
_________________________________________________________________

Q4.2: SP_WHODO
_________________________________________________________________

Sybase System 10.x:

use master
go
drop procedure sp_whodo
go
create procedure sp_whodo @loginame varchar(30) = NULL
as
declare @low int
declare @high int
declare @spidlow int
declare @spidhigh int

select @low = 0, @high = 32767, @spidlow = 0, @spidhigh = 32767

if @loginame is not NULL
begin
    select @low = suser_id(@loginame), @high = suser_id(@loginame)

    if @low is NULL
    begin
        if @loginame like "[0-9]%"
        begin
            select @spidlow = convert(int, @loginame),
                   @spidhigh = convert(int, @loginame),
                   @low = 0,
                   @high = 32767
        end
        else
        begin
            print "No login exists with the supplied name."
            return (1)
        end
    end
end

select spid,
       status,
       substring(suser_name(suid), 1, 12) loginame,
       hostname,
       convert(char(3), blocked) blk,
       convert(char(7), isnull(time_blocked, 0)) blk_sec,
       convert(char(16), program_name) program,
       convert(char(7), db_name(dbid)) dbname,
       convert(char(16), cmd) cmd,
       convert(char(6), cpu) cpu,
       convert(char(7), physical_io) io,
       convert(char(16), isnull(tran_name, "")) tran_name
from master..sysprocesses
where suid >= @low and suid <= @high
  and spid >= @spidlow and spid <= @spidhigh

return (0)
go
grant execute on sp_whodo to public
go

Sybase 4.x:

use master
go
drop procedure sp_whodo
go
create procedure sp_whodo @loginame varchar(30) = NULL
as
declare @low int
declare @high int
declare @spidlow int
declare @spidhigh int

select @low = 0, @high = 32767, @spidlow = 0, @spidhigh = 32767

if @loginame is not NULL
begin
    select @low = suser_id(@loginame), @high = suser_id(@loginame)

    if @low is NULL
    begin
        if @loginame like "[0-9]%"
        begin
            select @spidlow = convert(int, @loginame),
                   @spidhigh = convert(int, @loginame),
                   @low = 0,
                   @high = 32767
        end
        else
        begin
            print "No login exists with the supplied name."
            return (1)
        end
    end
end

select spid,
       status,
       substring(suser_name(suid), 1, 12) loginame,
       hostname,
       convert(char(3), blocked) blk,
       convert(char(16), program_name) program,
       convert(char(7), db_name(dbid)) dbname,
       convert(char(16), cmd) cmd,
       convert(char(6), cpu) cpu,
       convert(char(7), physical_io) io
from master..sysprocesses
where suid >= @low and suid <= @high
  and spid >= @spidlow and spid <= @spidhigh

return (0)
go
grant execute on sp_whodo to public
go
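Usage mirrors sp_who; a few hedged examples (the login name and spid are illustrations only):

    exec sp_whodo
    go
    /* all processes */

    exec sp_whodo "sa"
    go
    /* only processes owned by login sa */

    exec sp_whodo "42"
    go
    /* only spid 42 (a purely numeric argument is taken as a spid) */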
_________________________________________________________________

Q4.3: GENERATING DUMP/LOAD DATABASE COMMAND.
_________________________________________________________________

#!/bin/sh
#
# This script calls the function gen_dumpload_command to generate
# either a dump or a load command.
#
# This function works for both System 10 and Sybase 4.x
# installations. You simply need to change your method of thinking.
# In Sybase 4.x, we only had a single stripe. In System 10, most
# of the time we define a single stripe but in our bigger databases
# we define more stripes.
#
# Therefore, everything is a stripe. Whether we use one stripe or
# many... cool? Right on!
#
# The function gen_dumpload_command assumes that all dump devices
# adhere to the following naming convention:
#
#       stripe_NN_database
#
# NOTE: If your shop is different, search for "stripe" and replace
#       with your shop's value.
#
# gen_dumpload_command():
#
#   purpose:  to generate a dump/load to/from command based on
#             what is defined in sysdevices. The environment
#             variable D_DEV is set.
#
#   return:   zero on success, non-zero on failure.
#
#   sets var: D_DEV is set with the actual dump/load command;
#             stripe devices are also handled.
#
#   calls:    *none*
#
#   parms:    1 = DSQUERY
#             2 = PASSWD
#             3 = DB
#             4 = CMD -> "dump" or "load"
#
gen_dumpload_command()
{
    LOCAL_DSQUERY=$1
    LOCAL_PASSWD=$2
    DB_TO_AFFECT=$3
    CMD=$4      # dump/load

    if [ "$CMD" = "dump" ] ; then
        VIA="to"
    else
        VIA="from"
    fi

    # Check for a dump device
    echo "Checking for standard $CMD device"

    D_DEV=`echo "$LOCAL_PASSWD
select name from sysdevices where name like \"stripe%_$DB_TO_AFFECT\"
go" | $SYBIN/isql -U sa -S $LOCAL_DSQUERY -w1000 | sed -n -e '/stripe/p' | \
    nawk '{ if (NR == 1)
                print "'$CMD' database '$DB_TO_AFFECT' '$VIA'", $0
            else
                print "stripe on", $0
          }'`

    if [ -z "$D_DEV" ] ; then
        # nothing defined... :(
        return 1
    fi

    return 0
}

SYBIN=$SYBASE/bin

gen_dumpload_command MAG_LOAD_2 thissux wcid "dump"
if [ $? -eq 1 ] ; then
    echo "Error..."
fi

# so what does this generate? :-)
echo $D_DEV

# ... and it can be used as follows:
echo "$PASSWD
$D_DEV
go" | isql ....

exit 0
_________________________________________________________________

Q4.4: SYBPERL FAQ
_________________________________________________________________

_This is Michael Peppler's (mpeppler@itf.ch) FAQ._
_________________________________________________________________

Index of Sections

* Sybperl? What is that?
  + What is Sybperl?
  + Where can I get Sybperl?
  + Can I get Sybperl for Windows/NT?
  + Can I get Sybperl for OS foo?
  + Is there a version of Sybperl for Perl 5?
* Building and Installing
  + Where is uperl.o?
  + make test doesn't work
  + How to get Dynamic Linking under HP-UX?
  + How do I set PERL_SRC?
  + I've moved the Sybase libraries, and Sybperl won't run...
  + Sybperl won't run as a CGI script
* Sybperl 1.x vs Sybperl 2.x
  + Are Sybperl 1.x scripts compatible with 2.x?
  + Is Sybperl 2.x stable enough to use, or should I use 1.x for
    production code?
* Documentation
  + Is there any good sybperl documentation?
  + Is there a sybperl FAQ?
  + Is there a sybperl mailing list?
* Improving sybperl
  + Reporting bugs
  + Feature requests
_________________________________________________________________

Sybperl? What is that?

What is Sybperl?

Matthew Healy wrote this in a recent comp.databases.sybase post:

Perl is an interpreted programming language discussed in the _comp.lang.perl.*_ newsgroups; though it's interpreted, the interpreter has an internal compiler built in, so medium-size programs still run quite efficiently. It has become very popular among people who do database and/or system administration type work, for several reasons:

* Very powerful and flexible string-manipulation facilities; anything you can do with awk, grep, sed, etc. can be done in Perl -- and it even comes with utilities that convert scripts in those languages into Perl!
* No arbitrary limits on the size of any object; you can slurp up an entire file into a string variable if you have enough _RAM._
* An incredibly useful feature called associative arrays, which often gives you a quick and easy way of doing things that would otherwise require going back to your data structures textbooks.
* Versions are available for nearly every platform you've ever heard of.
* It's _free_!

Sybperl is a package of extensions to Perl that basically adds the Sybase db_library API calls to the Perl language; the combination is an extremely powerful scripting tool for Sybase DBAs and programmers, because it adds the existing strengths of Perl to the Sybase API.

So, when would you use it? Anytime the combination of shell scripts with isql is too limited but writing a C program is overkill. In particular, since Perl has become the language of choice for many WWW gurus, Sybperl has become the tool of choice for integrating Sybase databases with the Web. And there are now some toolkits written in Sybperl that make it even simpler; my favorite among those is WDB. Here are some URLs to check out:

http://www.sybase.com/WWW/sybase_www_faq.html
http://www.sybase.com/WWW/
http://www.sybase.com/WWW/sybase_www_tools.html
http://www.sybase.com/WWW/Sybperl/index.html
http://arch-http.hq.eso.org/bfrasmus/wdb/wdb.html
http://arch-http.hq.eso.org/bfrasmus/wdb/distribution/install.html
ftp://ftp.demon.co.uk/pub/perl/db/perl4/sybperl/
ftp://ftp.demon.co.uk/pub/perl/db/mod/Sybase/
ftp://ftp.cis.ufl.edu/pub/perl/
ftp://ftp.perl.com/

Where can I get Sybperl?
Sybperl is available from CPAN (the Comprehensive Perl Archive Network). The CPAN master is at ftp://ftp.funet.fi/pub/languages/perl/CPAN and Sybperl is archived in

ftp://ftp.funet.fi/pub/languages/perl/CPAN/authors/id/MEWP/sybperl-2.0.tar.gz

CPAN is mirrored widely; please select the CPAN site nearest you to keep the networks happy. At the moment the registered CPAN sites are:

* Africa
  ftp://ftp.is.co.za/programming/perl/CPAN/
* Australasia
  ftp://coombs.anu.edu.au/pub/perl/
  ftp://ftp.mame.mu.oz.au/pub/perl/CPAN/
* Europe
  ftp://ftp.funet.fi/pub/languages/perl/CPAN/
  ftp://ftp.sunet.se/pub/lang/perl/CPAN/
  ftp://ftp.cs.ruu.nl/pub/PERL/CPAN/
  ftp://ftp.demon.co.uk/pub/mirrors/perl/CPAN/
  ftp://ftp.pasteur.fr/pub/computing/unix/perl/CPAN/
  ftp://ftp.rz.ruhr-uni-bochum.de/pub/programming/languages/perl/CPAN/
  ftp://ftp.switch.ch/mirror/CPAN/
  ftp://orpheu.ci.uminho.pt/pub/lang/perl/
* North America
  ftp://ftp.cis.ufl.edu/pub/perl/CPAN/
  ftp://uiarchive.cso.uiuc.edu/pub/lang/perl/CPAN/
  ftp://ftp.delphi.com/pub/mirrors/packages/perl/CPAN/
  ftp://ftp.sedl.org/pub/mirrors/CPAN/
  ftp://ftp.sterling.com/programming/languages/perl/

_Version 2.x_ is also available on Sybase's Web page: http://www.sybase.com/WWW/Sybperl/index.html I try to make sure that the Sybase Web page is up to date, but I don't control it...

Can I get Sybperl for Windows/NT?

Perl is available for Windows/NT from ftp://ntperl.hip.com However, certain key components that are used to add modules to Perl are missing in the current version, which makes things difficult for us Sybperl users. That said, I _know_ that it is possible to get Sybperl to work under Windows/NT (I would venture to guess that most of the work is in creating appropriate Makefiles). Contact me if you are interested in attempting this.

Can I get Sybperl for OS foo?

Perl is primarily a Unix tool, and Sybperl was developed under SunOS. However, Perl has been ported to numerous other OSes (MS-DOS, Windows/NT, OS/2, VMS), and Sybperl should theoretically be portable to these OSes as well, in particular with Perl 5's better extension mechanism. I am always ready to provide any help I can to anyone wishing to port Sybperl to any particular OS/platform.

Is there a version of Sybperl for Perl 5?

Yes. _Sybperl 2.x_ works only with _Perl 5_. _Sybperl 1.x_ does _not_ work with _Perl 5_, as the Perl extension mechanism was changed with _Perl 5_.
_________________________________________________________________

Building and Installing

Where is uperl.o (1.x)?

_uperl.o_ is the object file that _sybperl 1.x_ needs to be linked with to give it access to all the Perl routines. _uperl.o_ is normally created when you build _perl 4.036_, but if you have run make clean since building, you can recreate it by running _make uperl.o_. If you have _Perl 5.x_ you need _sybperl 2.x_.

make test doesn't work

1. The 'interfaces' file is not visible. The Sybase libraries need access to a file that contains information on how to connect to the server. This file is normally located at the root of the Sybase installation directory, and the Sybase libraries and programs normally use the SYBASE environment variable to find this directory (ie sybperl will try to find $SYBASE/interfaces in order to run).

2. The userid/password combination is invalid. Edit the PWD file to add a valid userid/password combination for accessing your database server.

3. The $SYBASE environment variable is incorrectly set. The Sybase::CTlib module needs access to its _locales_ information in the $SYBASE/locales directory.
How to get Dynamic Linking under HP-UX?

The bundled C compiler that comes with HP-UX apparently can't produce position independent code, which is needed to build a dynamically loadable library under HP-UX. The solution there is to use the add-on ANSI-C compiler or GCC. In addition, you can't build a dynamically loadable module of DBlibrary v. 4.x, because it is a statically linked library, and was not compiled using the position independent code flag. So the end result is: to get a dynamically loadable version you need the Sybase System 10 OpenClient libraries, and a C compiler that is capable of producing position independent code.

How do I set PERL_SRC?

This problem sometimes appears when building sybperl with a copy of Perl that has not been installed (ie from the source tree). You've run:

    % perl Makefile.PL
    % make

and the output looked something like this:

    % make
    Rebuilding CTlib/Makefile ...
    /home/mpeppler/PERL/perl5.001/miniperl -I//home/mpeppler/PERL/perl5.001/lib \
        -e "use ExtUtils::MakeMaker; MM->runsubdirpl(qw(CTlib))" \
        INST_LIB=/home/mpeppler/PERL/perl5.001/lib INST_ARCHLIB=/home/mpeppler/PERL/perl5.001/lib \
        INST_EXE=./blib LINKTYPE=dynamic LIBPERL_A=libperl.a
    Unable to locate Perl source. Try setting PERL_SRC in Makefile.PL or on command line.
    make: *** [CTlib/Makefile] Error 2
    %

To fix this, you need to add a parameter to the WriteMakefile() call in each of the Makefile.PLs (toplevel, DBlib/Makefile.PL, CTlib/Makefile.PL and Sybperl/Makefile.PL). The parameter should look like this:

    WriteMakefile(DISTNAME => "sybperl",
                  .... other parameters,
                  PERL_SRC => "/home/mpeppler/src/perl5.001",
                  ...);

obviously replacing "/home/mpeppler/src/perl5.001" with the appropriate directory on your system.

I've moved the Sybase libraries, and Sybperl won't run...

The sybperl make process hard-codes the path to the Sybase shared libraries (libsybdb.so and friends) into the binaries (either the dynamically loadable modules, or the Perl executable). This is done so that Perl can find the libraries it needs even if the LD_LIBRARY_PATH environment variable, which is normally used to specify special library directories, is not set (as when running a script from cron, for example). This technique obviously fails when the paths to the Sybase libraries are changed (through an upgrade, or when moving the binaries to another machine with a different configuration). The solution is to:

* Set LD_LIBRARY_PATH to include the Sybase library directories, _or_
* Link the Sybase libraries to a standard directory (such as /usr/lib or /usr/local/lib) which will be searched by default.

Sybperl won't run as a CGI script

Typical symptoms: your script runs from the command line, but fails when run as a CGI script. The problem is (probably) that you normally have LD_LIBRARY_PATH set when running Sybperl scripts, but the HTTP daemon does not normally set this environment variable when executing child processes. It is not possible to set the LD_LIBRARY_PATH environment variable in the script itself (the variable needs to be set before the execution starts), so you may need to write a small shell wrapper that sets this environment variable and then execs your script. Alternatively, link the Sybase shared libraries to one of the _default_ directories (such as /usr/lib).
_________________________________________________________________

Sybperl 1.x vs Sybperl 2.x

Are sybperl 1.x scripts compatible with 2.x?

Yes. With these exceptions: @var means different things to Perl and to Transact-SQL.
If you use @var as SQL code (typically: "declare @var int exec my_proc @var out") you need to escape the @ (as in \@var).

If you were in the habit of making calls to the _sybperl 1.x_ subs without parens (ie &dbnextrow; instead of &dbnextrow()) then there are certain situations where the dbnextrow implementation in sybperl 2.x may try to use an invalid DBPROCESS pointer. This problem does not exist if your scripts always pass explicit DBPROCESS parameters.

Here at ITF I've linked /usr/local/bin/perl to /usr/local/bin/sybperl, and all my old sybperl scripts work, provided that they had a "require 'sybperl.pl';" at the top.

Is sybperl 2.x stable enough to use, or should I use 1.x for production code?

_Sybperl 2.x_ is composed of three modules: Sybase::DBlib, Sybase::CTlib and Sybase::Sybperl. The DBlib and Sybperl modules are stable, and I recommend their use in production code (as I've been doing here for over a year now). The Sybase::CTlib module is in _beta_, and the API may be slightly modified in the future. In addition, I've noticed a performance problem with the CTlib module which appears to make it quite a bit slower than the DBlib module for equivalent code. This is something that I still need to investigate.

The advantage (if one can call it that) of staying with _sybperl 1.x_ is that the code is frozen - both for _sybperl_ itself and for _perl_. This means that any bugs that you code around will not come back to bite you as you upgrade from one release to the other.
_________________________________________________________________

Documentation

Is there any good sybperl documentation?

There is a Sybperl man page (in the current release it comes out to 13 PostScript pages), but that is definitely _not_ sufficient in itself. You need _Perl_ documentation (there are over 370 pages of documentation in the standard Perl 5 release...). There are several good Perl books, in particular 'Programming Perl', Larry Wall & Randal Schwartz, O'Reilly, and 'Learning Perl', Randal Schwartz, O'Reilly. And you need _Sybase_ documentation, in particular the Sybase OpenClient/C manual (I've got mine online via the Sybooks CD-ROM) and the Sybase Transact-SQL manual.

Is there a sybperl FAQ?

Yes - you're reading it :-) But, more importantly, you should get the Perl FAQ, which is posted monthly in comp.lang.perl.announce, and which can be ftp'd from ftp://ftp.cis.ufl.edu/pub/perl/doc/FAQ

And you need the Sybase FAQ too: http://reality.sgi.com/employees/pablo_corp/Sybase_FAQ

Is there a sybperl mailing list?

Yes. The list's address is sybperl-l@trln.lib.unc.edu It was moved recently from _sybperl-list@itf.ch_ due to the high cost of supporting the list for my company. To subscribe, send a message to sybperl-l-REQUEST@trln.lib.unc.edu with

    add "your name"

in the body.
_________________________________________________________________

Improving sybperl

Reporting bugs

I have a simple bug tracking database here at ITF. You can submit bugs for sybperl either to myself, or directly into the bug database by sending an e-mail message to bugtrack@itf.ch with the following additional fields:

    bug-category: sybperl
    bug-priority: high [or medium, or low]
    bug-type: bug [or feature, or change (for behavior change) or other]
    bug-summary: One line description

The remainder of the message can be used to describe the bug or feature request in detail, with examples, etc.
Feature requests

If you have any suggestions regarding how _sybperl_ can be improved, please send them to me - I am always happy to try to add features :-)
_________________________________________________________________

Q4.5: DBSCHEMA.PL
_________________________________________________________________

In order to use this script you must have Sybperl installed -- see Q4.4 for more information.

#! /bin/sybperl
#
# @(#)dbschema.pl 1.11 2/22/94
#
#
# dbschema.pl   A script to extract a database structure from
#               a Sybase database
#
# Written by:     Michael Peppler (mpeppler@itf.ch)
# Last Modified:  22 Feb 1994
#
# Usage: dbschema.pl -d database -o script.name -t pattern -s server -v
#        where database is self-explanatory (default: master),
#        script.name is the output file (default: script.isql),
#        pattern is the pattern of object names (in sysobjects)
#        that we will look at (default: %), and server is
#        the server to connect to (default, the value of $ENV{DSQUERY}).
#
#        -v turns on a verbose switch.
#
# Changes: 11/18/93 - bpapp - Put in interactive SA password prompt
#          11/18/93 - bpapp - Get protection information for views and
#                             stored procedures.
#          02/22/94 - mpeppler - Merge bpapp's changes with itf version
#
#------------------------------------------------------------------------------

require 'sybperl.pl';
require '/usr/lib/perl/getopts.pl';
require '/usr/lib/perl/ctime.pl';

@nul = ('not null','null');

select(STDOUT); $| = 1;         # make unbuffered

do Getopts('d:t:o:s:v');

$opt_d = 'master' unless $opt_d;
$opt_o = 'script.isql' unless $opt_o;
$opt_t = '%' unless $opt_t;
$opt_s = $ENV{DSQUERY} unless $opt_s;

open(SCRIPT, "> $opt_o") || die "Can't open $opt_o: $!\n";
open(LOG, "> $opt_o.log") || die "Can't open $opt_o.log: $!\n";

#
# Log us in to Sybase as 'sa' and prompt for admin password.
#
print "\nAdministrative account password: ";
system("stty -echo");
chop($sapw = <STDIN>);
system("stty echo");

# Use this if your version of sybperl wants the DSQUERY on the dblogin line
#$dbproc = &dblogin("sa", $sapw, $opt_s);
# Use this if your version of sybperl doesn't want the DSQUERY on the dblogin line
$dbproc = &dblogin("sa", $sapw);

&dbuse($dbproc, $opt_d);

chop($date = &ctime(time));

print "dbschema.pl on Database $opt_d\n";
print LOG "Error log from dbschema.pl on Database $opt_d on $date\n\n";
print LOG "The following objects cannot be reliably created from the script in $opt_o. Please correct the script to remove any inconsistencies.\n\n";

print SCRIPT "/* This Isql script was generated by dbschema.pl on $date.
** The indexes need to be checked: column names & index names
** might be truncated!
*/\n"; print SCRIPT "\nuse $opt_d\ngo\n"; # Change to the appropriate database # first, Add the appropriate user data types: # print "Add user-defined data types..."; print SCRIPT "/* Add user-defined data types: */\n\n"; &dbcmd($dbproc, "select s.length, s.name, st.name,\n"); &dbcmd($dbproc, " object_name(s.tdefault),\n"); &dbcmd($dbproc, " object_name(s.domain)\n"); &dbcmd($dbproc, "from $opt_d.dbo.systypes s, $opt_d.dbo.systypes st\n"); &dbcmd($dbproc, "where st.type = s.type\n"); &dbcmd($dbproc, "and s.usertype > 100 and st.usertype _________________________________________________________________ Q4.6: SYBTCL FAQ _This is Tom Poindexter tpoindex@nyx.net FAQ._ _________________________________________________________________ Index of Sections * Overview * The enabling language platform * Design and commands * Applications * Information Sources * Availability and Support * About the Author _________________________________________________________________ Overview Sybtcl is an extension to Tcl (Tool Command Language) that allows Tcl programs to access Sybase databases. Sybtcl adds additional Tcl commands to login to a Sybase server, send SQL statements, retrieve result sets, execute stored procedures, etc. Sybtcl simplifies Sybase programming by creating a high level interface on top of DB-Library. Sybtcl can be used to program a wide variety of applications, from system administration procedures to end-user applications. _________________________________________________________________ The enabling language platform Tool Command Language, often abbreviated "Tcl" and pronounced as "tickle", was created by Dr. John Ousterhout at the University of California-Berkeley. Tcl is an interpreted script language, similar to Unix shell, Awk, Perl, and others. Tcl was designed to be easily extended, where new commands are added to the base interpreter to provide additional functionality. Core Tcl commands contain all of the usual constructs provided by most programming languages: setting and accessing variables, file read/write, if-then-else, do-while, function calls. Tcl also contains many productivity enhancing commands: list manipulation, associative arrays, and regular expression processing. Tcl has several features that make it a highly productive language. First, the language is interpreted. Interpreters allow execution without a compile and link step. Code can be developed with immediate feedback. Second, Tcl has a single data type: string. While this might at first glance seem to a deficiency, it avoids problems of data conversion and memory management. (This feature doesn't preclude Tcl from performing arithmetic operations.) Last, Tcl has a consistent and simple syntax, much the same as the Unix shell. Every Tcl statement is a command name, followed by arguments. Dr. Ousterhout also developed a companion Tcl extension, called Tk. Tk provides simplified programming of X11 applications with a Motif look and feel. X11 applications can be programmed with 60%-80% less code than equivalent Xt, Motif, or Xview programs using C or C++. Dr. Ousterhout is continuing Tcl/Tk development at Sun Microsystems. Current projects include porting Tcl/Tk to MS-Windows and Macintosh environments, creating a GUI window builder, and a byte-code compiler for the language. _________________________________________________________________ Design and commands Sybtcl was designed to fill the gap between pure applications development tools (e.g. Apt, Powerbuilder, et.al.) 
and database administration tools, often Unix shell scripts consisting of 'isql' and Awk pipelines. Sybtcl extends the Tcl language with specialized commands for Sybase access. Sybtcl consists of a set of C language functions that interface DB-Library calls to the Tcl language. Instead of a simple one-to-one interface to DB-Library, Sybtcl provides a high-level Sybase programming interface of its own.

The following example is a complete Sybtcl program that illustrates the simplified interface. It relies on the Tcl interpreter, "tclsh", that has been extended with Sybtcl.

    #!/usr/local/bin/tclsh
    set hand [sybconnect "mysybid" "mysybpasswd"]
    sybuse $hand pubs2
    sybsql $hand "select au_lname, au_fname from authors order by au_lname"
    sybnext $hand {
        puts [format "%s, %s" @1 @2]
    }
    sybclose $hand
    exit

In this example, a Sybase server connection is established ("sybconnect"), and the "pubs2" sample database is accessed ("sybuse"). An SQL statement is sent to the server ("sybsql"), and all rows returned are fetched and printed ("sybnext"). Finally, the connection is closed ("sybclose").

The same program can be made to display its output in an X11 window, with a few changes. The Tcl/Tk windowing shell, "wish", also extended with Sybtcl, is used.

    #!/usr/local/bin/wish
    listbox .sql_output
    button .exit -text exit -command exit
    pack .sql_output .exit
    set hand [sybconnect "mysybid" "mysybpasswd"]
    sybuse $hand pubs2
    sybsql $hand "select au_lname, au_fname from authors order by au_lname"
    sybnext $hand {
        .sql_output insert end [format "%s, %s" @1 @2]
    }
    sybclose $hand

In addition to these commands, Sybtcl includes commands to access return column names and datatypes ("sybcols"), return values from stored procedures ("sybretval"), reading and writing of "text" or "image" columns ("sybreadtext", "sybwritetext"), canceling pending results ("sybcancel"), and polling asynchronous SQL execution ("sybpoll"). Full access to Sybase server messages is also provided. Sybtcl maintains a Tcl array variable which contains server messages, output from stored procedures ("print"), and DB-Library and OS error messages.
_________________________________________________________________

Applications

The Sybtcl distribution includes "Wisqlite", an X11 SQL command processor. Wisqlite provides a typical windowing style environment to enter and edit SQL statements, list results of the SQL execution in a scrollable listbox, and save or print output. In addition, menu access to the Sybase data dictionary is provided, listing tables in a database, the column names and datatypes of a table, and the text of stored procedures and triggers.

Other applications included in the Sybtcl distribution include:

* a simple graphical performance monitor
* a version of "sp_who", with periodic refresh
* an enhanced version of Wisqlite, "UCO/Wisql", with point/click SQL generation
* an HTML template generator and CGI processor for WWW-Sybase access

Sybtcl users have reported a wide variety of applications written in Sybtcl, ranging from end user applications to database administration utilities.
_________________________________________________________________

Information Sources

Tcl/Tk is described in detail in "Tcl and the Tk Toolkit" by Dr. John Ousterhout, Addison-Wesley Publishing 1994, ISBN 0-201-63337-X. Another recent publication is "Practical Programming in Tcl and Tk" by Brent Welch, Prentice Hall 1995, ISBN 0-13-182007-9. A forthcoming book on Tcl extensions, including Sybtcl, is due for publication by O'Reilly and Associates in early 1996.
A wealth of information on Tcl/Tk is available via Internet sources:

news:comp.lang.tcl
http://www.sunlabs.com/research/tcl/
http://www.sco.com/Technology/tcl/Tcl.html
http://web.cs.ualberta.ca/~wade/HyperTcl/
ftp://ftp.smli.com/pub/tcl
ftp://ftp.aud.alcatel.com/tcl/
_________________________________________________________________

Availability and Support

Tcl/Tk and Sybtcl are both released in source code form under a "BSD" style license. Tcl/Tk and Sybtcl may be freely used for any purpose, as long as copyright credit is given to the respective owners. Tcl/Tk can be obtained from either anonymous FTP site listed above. Sybtcl is located on ftp.aud.alcatel.com, which serves as the Tcl archive site for Internet users. Other sources are the Fall/Winter 1995 release of Sun's Developer CD-ROM, and the "Tcl/Tk" CD-ROM title from Walnut Creek.

Tcl/Tk and Sybtcl can be easily configured under most modern Unix systems, including SunOS, Solaris, HP-UX, Irix, OSF/1, AIX, SCO, et al. Sybtcl requires Sybase's DB-Library, from Sybase's Open Client bundle. Current versions are:

* Tcl 7.4: released July 1, 1995
* Tk 4.0: released July 1, 1995
* Sybtcl 2.3: released October 9, 1995

The Internet newsgroup comp.lang.tcl is the focal point for support. The group is regularly read by developers and users alike. Authors may also be reached via email. Sun has committed to keeping Tcl/Tk as freely available software.
_________________________________________________________________

About the Author

Tom Poindexter is a consultant with expertise in Unix, relational databases, and systems and application programming. He holds a B.S. degree from the University of Missouri, and an M.B.A. degree from Illinois State University. He can be reached at tpoindex@nyx.net
_________________________________________________________________

Q4.7: Extended Stored Procedures
----------------------------------------------------------------------------

The following stored procedures were written by Ed Barlow ebarlow@pipeline.com and can be ftp'd from the following site:

ftp://ftp.pnl.gov/pub/incoming/proc.tar.Z [127K]

Here's a pseudo-man page of what you get:

Modified Sybase Procedures

Command             Description
sp__help            Better sp_help
sp__helpdb          Database Information
sp__helpdevice      Break down database devices into a nice report
sp__helpgroup       List groups in database by access level
sp__helpindex       Shows indexes by table
sp__helpsegment     Segment Information
sp__helpuser        Lists users in current database by group (includes aliases)
sp__lock            Lock information
sp__who             sp_who that fits on a page

Audit Procedures

Command             Description
sp__auditsecurity   Security Audit On Server
sp__auditdb         Audit Current Database For Potential Problems

System Administrator Procedures

Command             Description
sp__block           Blocking processes
sp__dbspace         Summary of current database space information
sp__dumpdevice      Listing of Dump devices
sp__helpdbdev       Show how Databases use Devices
sp__helplogin       Show logins and remote logins to server
sp__helpmirror      Shows mirror information, discover broken mirrors
sp__segment         Segment Information
sp__server          Server summary report (very useful)
sp__vdevno          Who's who in the device world

DBA Procedures

Command             Description
sp__badindex        Give information about bad indexes (nulls, bad statistics...)
sp__collist         List all columns in database
sp__indexspace      Space used by indexes in database
sp__noindex         List of tables without indexes
sp__helpcolumns     Show columns for given table
sp__helpdefault     List defaults (part of objectlist)
sp__helpobject      List objects
sp__helpproc        List procs (part of objectlist)
sp__helprule        List rules (part of objectlist)
sp__helptable       List tables (part of objectlist)
sp__helptrigger     List triggers (part of objectlist)
sp__helpview        List views (part of objectlist)
sp__trigger         Useful synopsis report of current database trigger schema

Reverse Engineering

Command             Description
sp__revalias        Get alias script for current db
sp__revdb           Get db creation script for server
sp__revdevice       Get device creation script
sp__revgroup        Get group script for current db
sp__revindex        Get indexes script for current db
sp__revlogin        Get logins script for server
sp__revmirror       Get mirroring script for server
sp__revuser         Get user script for current db

Other Procedures

Command             Description
sp__bcp             Create unix script to bcp in/out database
sp__date            Who can remember all the date styles?
sp__quickstats      Quick dump of server summary information
----------------------------------------------------------------------------

Q4.9: SQL TO DETERMINE SPACE USED FOR AN INDEX
_________________________________________________________________

OK, here's _sp_spaceused_ reduced to bare essentials:

    set nocount on
    declare @objname varchar(30)
    select @objname = "your table"

    select index_name = i.name,
           i.segment,
           rowtotal = rowcnt(i.doampg),
           reserved = (reserved_pgs(i.id, i.doampg) +
                       reserved_pgs(i.id, i.ioampg)),
           data = data_pgs(i.id, i.doampg),
           index_size = data_pgs(i.id, i.ioampg),
           unused = ((reserved_pgs(i.id, i.doampg) +
                      reserved_pgs(i.id, i.ioampg)) -
                     (data_pgs(i.id, i.doampg) +
                      data_pgs(i.id, i.ioampg)))
    into #space
    from sysindexes i
    where i.id = object_id(@objname)

You can analyse this in a number of ways:

1. This query should tally with _sp_spaceused @objname_:

    select 'reserved KB' = sum(reserved) * 2,
           'Data KB' = sum(data) * 2,
           'Index KB' = sum(index_size) * 2,
           'Unused KB' = sum(unused) * 2
    from #space

2. This one reports space allocation by segment:

    select 'segment name' = s.name,
           'reserved KB' = sum(reserved) * 2,
           'Data KB' = sum(data) * 2,
           'Index KB' = sum(index_size) * 2,
           'Unused KB' = sum(unused) * 2
    from #space t, syssegments s
    where t.segment = s.segment
    group by s.name

3. This one reports allocations by index:

    select t.index_name,
           s.name,
           'reserved KB' = reserved * 2,
           'Data KB' = data * 2,
           'Index KB' = index_size * 2,
           'Unused KB' = unused * 2
    from #space t, syssegments s
    where t.segment = s.segment

If you leave out the where clause in the initial _select into_, you can analyse across the whole database. Hope this points you in the right direction.
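Note that the "* 2" in these queries converts pages to KB on the assumption of a 2K server page size. If you run this analysis often, the essentials can be wrapped in a procedure; a minimal sketch (the name sp__tablespace is invented here and is not part of the package above):

    create proc sp__tablespace @objname varchar(30)
    as
    begin
        set nocount on
        /* per-index space for one table, in KB (assumes 2K pages) */
        select index_name = i.name,
               'reserved KB' = (reserved_pgs(i.id, i.doampg) +
                                reserved_pgs(i.id, i.ioampg)) * 2,
               'Data KB' = data_pgs(i.id, i.doampg) * 2,
               'Index KB' = data_pgs(i.id, i.ioampg) * 2
        from sysindexes i
        where i.id = object_id(@objname)
    end
    go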
_________________________________________________________________

Q4.10: _DSQL_ - AN _ISQL_ ALTERNATIVE
_________________________________________________________________

_dsql_ provides command history similar to _tcsh_. It was written by David Joyner at NCSU. _dsql_ takes advantage of GNU's _readline_ module, which must therefore also be downloaded in order to compile _dsql_:

ftp://straylight.acs.ncsu.edu/pub/sybase/tools/dsql-1.3.tar.Z [28K]
ftp://straylight.acs.ncsu.edu/pub/sybase/tools/readline-1.1.tar.Z [217K]
_________________________________________________________________

Q4.11: _XSYBMON_
_________________________________________________________________

_xsybmon_ provides an X interface to Sybase's _sp_monitor_ output. It was written by David Joyner at NCSU. This software may be anonymously ftp'd from:

ftp://straylight.acs.ncsu.edu/pub/sybase/tools/xsybmon-0.7a.tar.Z [56K]
_________________________________________________________________

Q4.12: SP_DOS
_________________________________________________________________

/*>>>>>>>>>>>>>>>>>>>>>>>>>>> sp_dos <<<<<<<<<<<<<<<<<<<<<<<<<<<<<*/
IF OBJECT_ID('dbo.sp_dos') IS NOT NULL
    DROP PROCEDURE sp_dos
GO

CREATE PROCEDURE sp_dos
    @vcObjectName varchar(30) = NULL
AS
/***********************************************************************
* sp_dos - Display Object Scope
* This procedure graphically displays the scope of an object in
* the database.
*
* Copyright 1996, all rights reserved.
*
* Author: David W. Pledger, Strategic Data Systems, Inc.
*
* Parameters
* ----------------------------------------------------------------
*   Name            In/Out  Description
* ----------------------------------------------------------------
*   @vcObjectName   In      Mandatory - The exact name of a single
*                           database object for which the call
*                           hierarchy is to be extracted.
*
* Selected Data
*   A sample report follows:
* ----------------------------------------------------------------
*
* SCOPE OF EFFECT FOR OBJECT: ti_users
* +------------------------------------------------------------------+
* (T) ti_users (Trigger on table 'users')
* |
* +--(P) pUT_GetError
* |  |
* |  +--(U) ui_error
* |
* +--(U) BGRP
* |
* +--(U) user_information (See Triggers: tu_user_information)
* |
* +--(U) users (See Triggers: ti_users, tu_users, td_users)
* |
* +--(P) pUT_LUDVersion
*    |
*    +--(P) pUT_GetError
*    |  |
*    |  +--(U) ui_error
*    |
*    +--(U) BGRP_LUDVersion
*
* Return Values
* ----------------------------------------------------------------
*   Value       Description
* ----------------------------------------------------------------
*   < -99       Unexpected error - should never occur.
*   -99 to -1   Sybase **reserved** return status values.
*   0           Execution succeeded
*   1           Execution of this procedure failed.
*   > 1         Unexpected error - should never occur.
*
***********************************************************************/
BEGIN
    /*------------------- Local Declarations -------------------------*/
    DECLARE @iObjectID    int          /* System ID of object */
    DECLARE @cObjectType  char(1)      /* System Object Type code */
    DECLARE @vcName       varchar(30)  /* System Object name */
    DECLARE @vcMsg        varchar(255) /* Error Message if needed */
    DECLARE @iInsTrigID   int          /* Insert Trigger ID */
    DECLARE @iUpdTrigID   int          /* Update Trigger ID */
    DECLARE @iDelTrigID   int          /* Delete Trigger ID */
    DECLARE @vcErrMsg     varchar(255) /* Error Message */

    /* Local variables to facilitate descending the parent-child
    ** object hierarchy. */
    DECLARE @iCurrent     int          /* Current node in the tree */
    DECLARE @iRoot        int          /* The root node in the tree */
    DECLARE @iLevel       int          /* The current level */

    /* Local variables that contain the fragments of the text to
    ** be displayed while descending the hierarchy. */
    DECLARE @iDotIndex    int          /* Index for locating periods */
    DECLARE @cConnector   char(3)      /* '+--' */
    DECLARE @cSibSpacer   char(3)      /* '|  ' */
    DECLARE @cBar         char(1)      /* '|' */
    DECLARE @cSpacer      char(3)      /* '   ' */
    DECLARE @cPrntStrng1  char(255)    /* The first string to print */
    DECLARE @cPrntStrng2  char(255)    /* The second string to print */
    DECLARE @iLoop        int          /* Temp var used for loop */
    DECLARE @vcDepends    varchar(255) /* Dependency String */
    DECLARE @iDependsItem int          /* Index to a string item */

    /* Create a temporary table to handle the hierarchical
    ** decomposition of the task parent-child relationship.
The Stack ** table keeps track of where we are while the leaf table keeps ** track of the leaf tasks which need to be performed. */ CREATE TABLE #Stack (iItem int, iLevel int) /*------------------- Validate Input Parameters --------------------*/ /* Make sure the table is local to the current database. */ IF (@vcObjectName LIKE "%.%.%") AND (SUBSTRING(@vcObjectName, 1, CHARINDEX(".", @vcObjectName) - 1) != DB_NAME()) GOTO ErrorNotLocal /* Now check to see that the object is in sysobjects. */ IF OBJECT_ID(@vcObjectName) IS NULL GOTO ErrorNotFound /* ---------------------- Initialization -------------------------*/ /* Don't print any rowcounts while this is in progress. */ SET NOCOUNT ON /* Retrieve the object ID out of sysobjects */ SELECT @iObjectID = O.id, @cObjectType = O.type FROM sysobjects O WHERE O.name = @vcObjectName /* Make sure a job exists. */ IF NOT (@@rowcount = 1 and @@error = 0 and @iObjectID > 0) GOTO ErrorNotFound /* Initialize the print string pieces. */ SELECT @cConnector = "+--", @cSibSpacer = "|..", @cBar = "|", @cSpacer = "...", @cPrntStrng1 = "", @cPrntStrng2 = "" /* Print a separator line. */ PRINT " " PRINT "** Utility by David Pledger, Strategic Data Systems, Inc. **" PRINT "** PO Box 498, Springboro, OH 45066 **" PRINT " " PRINT " SCOPE OF EFFECT FOR OBJECT: %1!",@vcObjectName PRINT "+------------------------------------------------------------------+ " /* -------------------- Show the Hierarchy -----------------------*/ /* Find the root task for this job. The root task is the only task ** that has a parent task ID of null. */ SELECT @iRoot = @iObjectID /* Since there is a root task, we can assign the first ** stack value and assign it a level of one. */ SELECT @iCurrent = @iRoot, @iLevel = 1 /* Prime the stack with the root level. */ INSERT INTO #Stack values (@iCurrent, 1) /* As long as there are nodes which have not been visited ** within the tree, the level will be > 0. Continue until all ** nodes are visited. This outer loop descends the tree through ** the parent-child relationship of the nodes. */ WHILE (@iLevel > 0) BEGIN /* Do any nodes exist at the current level? If yes, process them. ** If no, then back out to the previous level. */ IF EXISTS (SELECT * FROM #Stack S WHERE S.iLevel = @iLevel) BEGIN /* Get the smallest numbered node at the current level. */ SELECT @iCurrent = min(S.iItem) FROM #Stack S WHERE S.iLevel = @iLevel /* Get the name and type of this node. */ SELECT @cObjectType = O.type, @vcName = O.name, @iInsTrigID = ISNULL(O.instrig, 0), @iUpdTrigID = ISNULL(O.updtrig, 0), @iDelTrigID = ISNULL(O.deltrig, 0) FROM sysobjects O WHERE O.id = @iCurrent /* * *=================================================* * * * Print out data for this node. (Consider * * * * making this a separate procedure.) * * * *=================================================* * */ /* Initialize the print strings to empty (different from NULL). ** @cPrntStrng1 is used to 'double space' the output and ** contains the necessary column connectors, but no data. ** @cPrntStrng2 contains the actual data at the end of the ** string. */ SELECT @cPrntStrng1 = "" SELECT @cPrntStrng2 = "" /* Level 1 is the root node level. All Jobs have a single ** root task. All other tasks are subordinate to this task. ** No job may have more than one root task. */ IF @iLevel = 1 BEGIN /* Print data for the root node.
*/ SELECT @cPrntStrng1 = "", @cPrntStrng2 = "(" + @cObjectType + ") " + @vcName END ELSE /* Else part of (IF @iLevel = 1) */ BEGIN /* Initialize loop variable to 2 since level one has ** already been processed for printing. */ SELECT @iLoop = 2 /* Look at the values on the stack at each level to ** determine which symbol should be inserted into the ** print string. */ WHILE @iLoop <= @iLevel BEGIN /* While the loop variable is less than the current ** level, add the appropriate spacer to line up ** the printed output. */ IF @iLoop < @iLevel BEGIN /* Is there a sibling (another node which exists ** at the same level) on the stack? If so, use ** one type of separator; otherwise, use another ** type of separator. */ IF EXISTS(SELECT * FROM #Stack WHERE iLevel = @iLoop) BEGIN SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cSibSpacer SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) + @cSibSpacer END ELSE BEGIN SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cSpacer SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) + @cSpacer END END ELSE /* Else part of (IF @iLoop < @iLevel) */ BEGIN SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cBar SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) + @cConnector + "(" + @cObjectType + ") " + @vcName END /* Increment the loop variable */ SELECT @iLoop = @iLoop + 1 END /* While @iLoop <= @iLevel */ END /* IF @iLevel = 1 */ /* Spaces are inserted into the string to separate the levels ** into columns in the printed output. Spaces, however, caused ** a number of problems when attempting to concatenate the ** two strings together. To perform the concatenation, the ** function rtrim was used to remove the end of the string. ** This also removed the spaces we just added. To aleviate ** this problem, we used a period (.) wherever there was ** supposed to be a space. Now that we are ready to print ** the line of text, we need to substitute real spaces ** wherever there is a period in the string. To do this, ** we simply look for periods and substitute spaces. This ** has to be done in a loop since there is no mechanism to ** make this substitution in the whole string at once. */ /* Find the first period. */ SELECT @iDotIndex = charindex (".", @cPrntStrng1) /* If a period exists, substitute a space for it and then ** find the next period. */ WHILE @iDotIndex > 0 BEGIN /* Substitute the space */ SELECT @cPrntStrng1 = stuff(@cPrntStrng1, @iDotIndex, 1, " ") /* Find the next. */ SELECT @iDotIndex = charindex (".", @cPrntStrng1) END /* Do the same thing for the second print string. */ SELECT @iDotIndex = charindex (".", @cPrntStrng2) WHILE @iDotIndex > 0 BEGIN SELECT @cPrntStrng2 = stuff(@cPrntStrng2, @iDotIndex, 1, " ") SELECT @iDotIndex = charindex (".", @cPrntStrng2) END SELECT @vcDepends = NULL IF @iInsTrigID > 0 SELECT @vcDepends = OBJECT_NAME(@iInsTrigID) + " (Insert)" IF @iUpdTrigID > 0 IF @vcDepends IS NULL SELECT @vcDepends = OBJECT_NAME(@iUpdTrigID) + " (Update)" ELSE SELECT @vcDepends = @vcDepends + ", " + OBJECT_NAME(@iUpdTrigID) + " (Update)" IF @iDelTrigID > 0 IF @vcDepends IS NULL SELECT @vcDepends = OBJECT_NAME(@iDelTrigID) + " (Delete)" ELSE SELECT @vcDepends = @vcDepends + ", " + OBJECT_NAME(@iDelTrigID) + " (Delete)" IF @vcDepends IS NOT NULL IF @cObjectType = "T" SELECT @cPrntStrng2 = @cPrntStrng2 + " (Trigger on table '" + @vcDepends + "')" ELSE SELECT @cPrntStrng2 = @cPrntStrng2 + " (See Triggers: " + @vcDepends + ")" /* Remove trailing blanks from the first print string. 
*/ SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) /* Print the two strings. */ PRINT @cPrntStrng1 PRINT @cPrntStrng2 /* Remove the current entry from the stack (Pop) */ DELETE #Stack WHERE #Stack.iLevel = @iLevel AND #Stack.iItem = @iCurrent /* Add (push) to the stack all the children of the current ** node. */ INSERT INTO #Stack SELECT D.depid, @iLevel + 1 FROM sysdepends D WHERE D.id = @iCurrent /* If any were added, then we must descend another level. */ IF @@rowcount > 0 BEGIN SELECT @iLevel = @iLevel + 1 END END ELSE BEGIN /* We have reached a leaf node. Move back to the previous ** level and see what else is left to process. */ SELECT @iLevel = @iLevel - 1 END END /* While (@iLevel > 0) */ PRINT " " RETURN (0) /*------------------------ Error Handling --------------------------*/ ErrorNotLocal: /* 17460, Table must be in the current database. */ EXEC sp_getmessage 17460, @vcErrMsg OUT PRINT @vcErrMsg RETURN (1) ErrorNotFound: /* 17461, Table is not in this database. */ EXEC sp_getmessage 17461, @vcErrMsg OUT PRINT @vcErrMsg PRINT " " PRINT "Local object types and objects are:" SELECT "Object Type" = type, "Object Name" = name FROM sysobjects WHERE type IN ("U","TR","P","V") ORDER BY type, name RETURN (1) END GO grant execute on sp_dos to public go _________________________________________________________________ _________________________________________________________________ Q4.13: SQSH, RELEASE 1.0 _Last Update: 22-May-1996 20:45:00 EST_ _________________________________________________________________ Sybase-FAQ Notice You are currently reading a special Sybase-FAQified version of my home page. I will attempt to keep it as up-to-date as possible; however, there is a chance that it may lag somewhat behind my personal page (http://www.voicenet.com/~gray/sqsh.html). Also, this version has been stripped of changelog and status information in order to shorten it up a bit for the plain-text version of the FAQ. What is SQSH? Sqsh (pronounced skwish) is short for SQshelL (pronounced s-q-shell); it is intended as a replacement for the venerable 'isql' program supplied by Sybase. It came about due to years of frustration with trying to do real work with a program that was never meant to perform real work. Sqsh is much more than a nice prompt (a la 'dsql', from David B. Joyner); it is intended to provide much of the functionality provided by a good shell, such as variables, redirection, pipes, back-grounding, job control, history, command completion, and dynamic configuration. Also, as a by-product of the design, it is remarkably easy to extend and add functionality. Join the sqsh mailing list Thanks to the tireless efforts of Pablo Sanchez (our fearless maintainer of the Sybase FAQ), and the excellent people at SGI, the sqsh mailing list is now available for use! Keep track of the latest developments. Offer suggestions for future additions. Offer me money (grin). To subscribe, send mail to external-majordomo@postofc.corp.sgi.com With a _body_ of: subscribe sqsh-users [optional e-mail address] Or with a _body_ of just: help for a list of commands.
Once subscribed you may send mail to the sqsh-users mailing list by addressing your mail to: sqsh-users@postofc.corp.sgi.com Availability Sqsh may be found on the following sites: * http://www.voicenet.com/~gray/sqsh-1.0.tar.gz * ftp://poseidon.csci.unt.edu/pub/sqsh * ftp://ftp.netcom.com/pub/he/heyjude/gray * ftp://davox2.davox.com/pub/sqsh Keep in mind that sometimes the different sites become out of sync, so at times the latest version may only be available at one of them. If you are wondering what the funny '.gz' extension is on the end of some of the files, I highly recommend that you grab a copy of ftp://prep.ai.mit.edu/pub/gnu/gzip-1.2.4.tar or you can get a regular UNIX compressed version http://www.voicenet.com/~gray/sqsh-1.0.tar.Z. I also try to keep around the previous release http://www.voicenet.com/~gray/sqsh-0.8.tar.gz, just in case I royally screw up the current release (which could happen). If you have trouble reaching any of the sites above, you can send me e-mail at gray@voicenet.com; I am typically pretty good about responding. Licensing 99% of the software that I use is free; therefore, I like to give back in kind. Sqsh is held under the GNU General Public License (GPL) and therefore may be freely distributed under the terms of this license. Building and Installing Refer to the INSTALL file for directions on installation. And, once again, thank the GNU folk for the wonderful GNU Autoconf automated configuration system. Commands Sqsh provides all commands provided by isql (such as go, reset, etc.)-- which wasn't hard, there aren't many of them--along with a large base of extended commands. Typically all commands in sqsh are prefixed with a '\' to avoid collision with the TSQL syntax. For example: 1> \help Available commands: \abort \alias \buf-append \buf-copy \buf-edit \buf-get \buf-load \buf-save \buf-show \connect \done \echo \exit \go \help \history \jobs \kill \loop \quit \read \reconnect \redraw \reset \set \shell \show \sleep \unalias \wait \warranty Use '\help command' for more details However, for those of you that just can't stand the '\', all commands may be aliased to any other name that you wish via the '\alias' command. Variables Variables are provided in sqsh as both a mechanism for storing and retrieving frequently used information as well as a method for configuring the behavior of the shell. For example: 1> \set table_name="syscolumns" 1> select "Count" = count(*) from $table_name 2> go Count ----------- 1123 (1 row affected) And: 1> \set table_name="syscolumns" 1> \set headers=off 1> select count(*) from $table_name 2> go 1123 (1 row affected) All variables are documented in-depth in the manual page for sqsh, and the expansion of variables within the SQL command buffer may be turned off and on as desired (via the $expand variable). Redirection & Pipes How many times have you watched a result set disappear from your screen because you didn't hit ^S fast enough? Well, no more. Now, any command available in sqsh may be redirected to/from a file or pipelined to another process. For example, it is now legal to type: 1> select * from sysobjects 2> go | grep test | more You may also redirect output to files and (if you are careful) can redirect input from files: 1> select * from sysobjects 2> go 2>/dev/null >/tmp/objects.txt Backgrounding & Job Control Suppose you want to run a long complex query and continue to work while waiting for the results. With isql, the most effective way to do this was to run two copies of isql. With sqsh you can now do: 1> select ...
/* big nasty select */ 2> go & Job #1 started 1> After typing 'go &', sqsh launches a child process, which reconnects to the database and performs the desired query. This is similar to job control within a standard shell except that, by default, in sqsh the background job's output will be deferred until the job completes. So when the big nasty query, above, completes you will see a message like: 1> sp_helptext .... Job #1 completed (output pending) 2> and to show the output of the job you can do: 1> \show 1 | more Once again, the behavior of output deferral may be turned on and off via the $defer_bg variable. Sqsh also provides the commonly used job control commands available in such shells as csh and bash, such as \jobs (to display running jobs) and \kill (to terminate jobs). Command Substitution With the 1.0 release, sqsh is slowly beginning to look more-and-more like a real shell with the addition of command substitution. This feature allows a UNIX command to be substituted anywhere within a sqsh command or within a SQL batch simply by placing the command within backquotes (or ` -- this may not come out to be a backquote depending on which font your web browser is using). For example: 1> select count(*) from `echo syscolumns` 2> go | `echo more` Currently, sqsh allows a multi-line command substitution within a SQL batch; however, this is not yet supported for sqsh commands (such as go). For example you can do: 1> select count(*) from `echo 2> syscolumns` 3> go Whereas you _cannot_ do: 1> select count(*) from syscolumns 3> go | `echo more` Hopefully, in the near future I'll make sqsh smart enough to support line-continuations with sqsh commands. Believe it or not, it isn't that easy to do. In-line Go As of release 0.5, sqsh now supports a form of in-line go, via a ; placed anywhere within the current line, such as: 1> sp_who ; And, anything that can follow the "go" command may also follow the inline ; 1> sp_who ; | more Sqsh even attempts to be relatively smart, and ignores semicolons found within single or double quotes of a single command, although it currently does not deal with semicolons located in comments. Note, in order to turn this feature on, execute: 1> \set semicolon_hack=1 History Sqsh provides two methods for history control: line-by-line history using either vi or emacs styles (via ftp://prep.ai.mit.edu/pub/gnu/readline-2.0.tar.gz), and batch history, so that entire statements may be re-run or edited: 1> \history ... (12) select name, id from syscolumns where name like "%$name%" (13) select distinct title, type from titles where title in (select title from titles, titleauthor, authors where titles.title_id = titleauthor.title_id and authors.state = "CA") .. Most commands support a csh-style reference to history entries via '!!', or '!n'. 1> \vi !! More Information If you still aren't convinced, and would like more information, see http://www.voicenet.com/~gray/sqsh.1.html (thanks to man2html, by Earl Hood). Who am I? Scott C. Gray ( gray@voicenet.com, sgray@sybase.com ).
Q4.14: SP_GETDAYS _________________________________________________________________ use master go drop proc sp_getdays go create procedure sp_getdays @days int OUTPUT, @date datetime=NULL as declare @m int, @y int if (@date is NULL) select @date = getdate() select @m = datepart(mm, @date) if (@m = 2) begin select @y = datepart(yy, @date) if (@y % 4 = 0) and ((@y % 100 != 0) or (@y % 400 = 0)) select @days = 29 else select @days = 28 end else begin /* map Aug-Dec (8-12) onto 1-5 so that odd month numbers have 31 days */ if (@m > 7) select @m = @m - 7 select @days = (30 + (@m & 1)) end return (1) go grant execute on sp_getdays to public go _________________________________________________________________ Q4.15: ddl_insert.pl ---------------------------------------------------------------------------- In order to use this script you must have Sybperl installed -- see Q4.4 for more information. #!/usr/local/bin/perl # Author: Vincent Yin (umyin@mctrf.mb.ca) Aug 1994 Last Modified: May 1996 chomp($basename = `basename $0`); $usage = <<EOF; USAGE $basename database userid passwd pattern [ pattern ... ] DESCRIPTION This script selects all rows from the tables whose names match <pattern>, analyses the data and table structure, then prints out a bunch of insert values ( ... ) statements that would re-populate the table. It's an alternative to `bcp'. `bcp' has its limitations (e.g. one often needs to turn on 'select into/bulk copy' option in the database before running bcp.) Table names are matched with Transact-SQL's LIKE clause. When more than one pattern is specified on command line, the LIKE clauses are OR'ed. In any case, the LIKE clause(s) is logged to the beginning of the output as a comment, so that you'll see how this program interprets the command line. The SQL script is printed to stdout. Since it only prints out the SQL but doesn't submit it to the SQL server, this procedure is safe to run. It doesn't modify database in any way. EXAMPLES To print this usage page: % $basename To print SQL that populates the table master..sysobjects and systypes: % $basename master userid passwd 'sysobjects' 'systypes' To print SQL that populates all system tables in master db: % $basename master userid passwd 'sys%' BUGS Embedded line breaks in strings are allowed in Sybase's isql, but not allowed in SQLAnywhere's isql. So this script converts embedded line breaks (both DOS styled and UNIX styled) to blank characters. EOF $batchsize = 10; # The number of INSERTs before a `go' is issued. # This is to make the output compact. # .................... No change needed below this line ........................ use Sybase::DBlib; die $usage unless $#ARGV >= 3; ($db, $user, $passwd, @pattern) = @ARGV; $likeclause = &sql_pattern_to_like_clause('name', @pattern); print <<EOF; /* SQL generated by $basename. LIKE clause(s): $likeclause */ EOF $dbh = new Sybase::DBlib($user, $passwd); $dbh->{dbNullIsUndef} = 1; $dbh->dbuse($db); # Get the list of tables. $tablelist = $dbh->sql("select name from sysobjects where type in ('S','U') and $likeclause order by name "); foreach $tableref (@$tablelist) { $table = @$tableref[0]; print "\n\n/*.............. $table ...............*/\n"; print "-- ", `date`, "\n"; print "declare \@d datetime\n"; print "select \@d = getdate()\n"; print "print ' %1! $table', \@d\ngo\n\n"; print "truncate table $table -- Lookout !!!!!!\ngo\n\n"; $dbh->dbcmd("select * from $table"); $dbh->dbsqlexec; $dbh->dbresults; while (@row = $dbh->dbnextrow()) { print "insert $table values("; for ($i=0; $i <= $#row; $i++) { # build the INSERT statement # Analyse datatype to decide if this column needs to be quoted.
$coltype = $dbh->dbcoltype($i+1); if (!defined($row[$i])) { print 'NULL'; # Never quote NULL regardless of datatype } elsif ($coltype==35 or $coltype==39 or $coltype==47 or $coltype==58 or $coltype==61 or $coltype==111 ){ # See systypes.type/name for explanation of $coltype. $row[$i] =~ s/\r|\n/ /g; # Handles both DOS and UNIX line breaks $row[$i] =~ s/"/""/g; # Stuff double quotes print "\"" . $row[$i] . "\""; } else { print $row[$i]; } print ", " unless $i == $#row; } print ")\n"; # wrap up the INSERT statement. # print `go' at every $batchsize interval. print "go\n" unless $dbh->DBCURROW % $batchsize; } print "\ngo\n\n"; # print a `go' after the entire table is done. print "-- ### End for $table: rowcount = ", $dbh->DBCURROW, "\n"; } # ................................. sub ........................................ sub main'sql_pattern_to_like_clause { local($field_name, @pattern) = @_; $like_clause = "\t( 1 = 0 "; foreach (@pattern) { $like_clause .= "\n or $field_name like '" . $_ . "' "; } $like_clause .= "\n\t) \n"; } ---------------------------------------------------------------------------- Q4.16: SP_EXAM_SPACE _________________________________________________________________ use master go if exists (select * from sysobjects where type = "P" and name ="sp_exam_space") drop procedure sp_exam_space go CREATE Procedure sp_exam_space as -- Purpose: To profile the space used by tables and indexes in a database. -- Copyright (c) 1996 Robert L. Munson -- Permission granted to Sybase Open Line on Compuserve to distribute -- Permission granted for individual non commercial use. -- All other rights reserved. -- Written by Bob Munson for the Sybase Community -- Dedicated to Unicef, the United Nations Childrens Fund 1-800-FOR-KIDS -- If you find sp_exam_space of value, I invite your support of Unicef. 
declare @tot_rows_no int declare @tot_data_pgs_no int declare @tot_indx_pgs_no int /* Create a table to contain the report */ CREATE Table #report(section varchar(30), /* Major section of the report */ seq int, /* Line number within the section */ text varchar(120) /* Report content */ ) CREATE Clustered Index report_pkix on #report(section,seq) /* Header Section */ Insert Into #report(section, seq, text) Select "aaa",-4, "sp_exam_space Report of " + isnull(@@servername,"-- Not Named --") +"."+db_name() + " on " + convert(char(13),getdate(),107) Insert Into #report(section, seq, text) Select "aaa",-1, @@version /* total space used in database */ Insert Into #report values("abc",-2," ") Insert Into #report Select "abc",-1,"Total database space" /* Number of segments */ Insert Into #report values("are",-2," ") Insert Into #report values("are",-1,"Segments - from syssegments") Insert Into #report select "are",segment, " Segment: " + convert(varchar,segment) + " " + name from syssegments /* Number of objects in each segment */ Insert Into #report values("bak",-2," ") Insert Into #report values("bak",-1,"Number of objects, rows and pages in each segment") Insert Into #report Select "bak", segment, " segment " + convert(varchar,segment) + " " + convert(varchar,count(*)) + " objects " + convert(varchar,sum(rowcnt(doampg))) + " rows " + convert(varchar,sum(reserved_pgs(id,doampg))) + " data reserved pages " + convert(varchar,sum(reserved_pgs(id,ioampg))) + " Index Reserved Pages" from sysindexes group by segment /* Megs in each segment */ /* Space for data, space for clustered indexes, space for indexes, space for chained */ insert into #report values("car",-2, " ") Insert Into #report Values("car",-1,"Distribution of space by indid - from sysindexes") Insert Into #report Select "car", 1-abs(sign(indid-1)) + 2*sign(1+sign(indid-2))*sign(1-sign(indid-254)) + 255*(1-abs(sign(indid-255))), " " + convert(varchar,1-abs(sign(indid-1)) + 2*sign(1+sign(indid-2))*sign(1-sign(indid-254)) + 255*(1-abs(sign(indid-255)))) + " " + convert(varchar,sum(reserved_pgs(id,doampg)) + sign(indid-1)*sum(reserved_pgs(id,ioampg))) + " pages " + convert(varchar,sum(reserved_pgs(id,doampg)) + sign(indid-1)*sum(reserved_pgs(id,ioampg))/512.0) + " Megs " from sysindexes group by 1-abs(sign(indid-1)) + 2*sign(1+sign(indid-2))*sign(1-sign(indid-254)) + 255*(1-abs(sign(indid-255))),sign(indid-1) create table #dist(log_no int, count_no int, rows_no int null,pgs_no int null) Select @tot_rows_no=sum(rowcnt(doampg))/100, @tot_data_pgs_no = sum(reserved_pgs(id,doampg))/100 from sysindexes where indid <=1 and id >=100 Select @tot_indx_pgs_no = sum(reserved_pgs(id,ioampg))/100 from sysindexes where indid >=2 and indid <250 and id >=100 /* Distribution of #rows in tables */ Insert Into #report values("dar",-2, " ") Insert Into #report values("dar",-1, "Distribution of Rows - from sysindexes (excluding system tables)") insert into #dist select charindex(substring(convert(varchar,rowcnt(doampg)),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,rowcnt(doampg))) , count(*), sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg)) from sysindexes where indid <=1 and id >=100 group by charindex(substring(convert(varchar,rowcnt(doampg)),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,rowcnt(doampg))) Insert Into #report Select "dar",log_no, " " +convert(varchar,count_no) + " Tables have "+ substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <= #rows <= " + 
substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) + " Totals: Rows=" + convert(varchar,rows_no) + " (" + convert(varchar,(rows_no/@tot_rows_no)) + "%)" + " Pages=" + convert(varchar,pgs_no) + " (" + convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" + " Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0)) from #dist /* distribution of rows per page */ /* Distribution of pages for data */ truncate table #dist Insert Into #report values("dba",-2, " ") Insert Into #report values("dba",-1, "Distribution of Pages - from sysindexes (excluding system tables)") insert into #dist select charindex(substring(convert(varchar,reserved_pgs(id,doampg)),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,reserved_pgs(id,doampg))) , count(*), sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg)) from sysindexes where indid <=1 and id >=100 group by charindex(substring(convert(varchar,reserved_pgs(id,doampg)),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,reserved_pgs(id,doampg))) Insert Into #report Select "dba",log_no, " " + convert(varchar,count_no) +" Tables Have " + substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <=#pages<= " + substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) + " Totals: Rows=" + convert(varchar,rows_no) + " (" + convert(varchar,(rows_no/@tot_rows_no)) + "%)" +" Pages=" + convert(varchar,pgs_no) + " (" + convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" + " Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0)) from #dist /* distribution of rows in indexes */ truncate table #dist Insert Into #report values("dmv",-2, " ") Insert Into #report values("dmv",-1, "Distribution of Rows for indexes - from sysindexes (excluding system tables)") insert into #dist select charindex(substring(convert(varchar,rowcnt(datarows.doampg)),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,rowcnt(datarows.doampg))) , count(*), sum(rowcnt(datarows.doampg)),sum(reserved_pgs(ndx.id,ndx.ioampg)) from sysindexes ndx,sysindexes datarows where ndx.indid >=2 and ndx.indid <=250 and ndx.id >=100 and ndx.id=datarows.id and datarows.indid <=1 and datarows.id>=100 group by charindex(substring(convert(varchar,rowcnt(datarows.doampg)),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,rowcnt(datarows.doampg))) Insert Into #report Select "dmv",log_no, " " + convert(varchar,count_no) +" Indexes Have " + substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <=#rows<= " + substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) + " Totals: Rows=" + convert(varchar,rows_no) + "(" + convert(varchar,(rows_no/@tot_rows_no)) + "%)" + " Pages=" + convert(varchar,pgs_no) + " (" + convert(varchar,(pgs_no/@tot_indx_pgs_no)) + "%)" + " Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0)) from #dist /* Distribution of length of rows */ insert Into #report values("ept",-2, " ") Insert Into #report Values("ept",-1, "Distribution of minimum length of rows - from sysindexes (excluding system tables)") truncate table #dist Insert Into #dist select charindex(substring(convert(varchar,minlen),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,minlen)) , count(*), sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg)) from sysindexes where indid <=1 group by charindex(substring(convert(varchar,minlen),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,minlen)) Insert into #report 
Select "ept",log_no, " " + convert(varchar,count_no) + " Tables Have " +substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <=minlen <= " + substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) + " Totals: Rows=" + convert(varchar,rows_no) + " (" + convert(varchar,(rows_no/@tot_rows_no)) + "%)" + " Pages=" + convert(varchar,pgs_no) + " (" + convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" + " Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0)) from #dist Insert Into #report Values("fbi",-2, " ") Insert Into #report Values("fbi",-1,"Distribution of maxlen of rows - from sysindexes (excluding system tables)") truncate table #dist Insert Into #dist select charindex(substring(convert(varchar,maxlen),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,maxlen)) , count(*),sum(rowcnt(doampg)),sum(reserved_pgs(id,doampg)) from sysindexes where indid <=1 and id>=100 group by charindex(substring(convert(varchar,maxlen),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,maxlen)) Insert Into #report Select "fbi",log_no, " " + convert(varchar,count_no) +" Tables Have " + substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <=maxlen<= " + substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) + " Totals: Rows=" + convert(varchar,rows_no) +" (" + convert(varchar,(rows_no/@tot_rows_no)) + "%)" + " Pages=" + convert(varchar,pgs_no) +" (" + convert(varchar,(pgs_no/@tot_data_pgs_no)) + "%)" + " Megs=" + convert(varchar,convert(numeric(7,2),pgs_no/512.0)) from #dist /* distribution of sum of the length of columns in the tables */ Insert Into #report Values("fda",-2," ") Insert Into #report Values("fda",-1,"Distribution of sum of column lengths") /* sysprocedures - number of rows */ Insert Into #report Values("gao",-2," ") Insert Into #report Values("gao",-1,"Distribution of rows in sysprocedures") create table #prep_dist(id_no int, type_no int, count_no int) insert into #prep_dist Select id,type,count(*) from sysprocedures group by id,type truncate table #dist Insert Into #dist(log_no,count_no) select charindex(substring(convert(varchar,count_no),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,count_no)) , count(*) from #prep_dist group by charindex(substring(convert(varchar,count_no),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,count_no)) Insert Into #report Select "gao",log_no, " " + convert(varchar,count_no) + " Procs Have " + substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <= #rows <= " + substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) from #dist /* syscomments - number of rows */ Insert Into #report Values("hal",-2, " ") Insert Into #report Values("hal",-1,"Distribution of rows in syscomments") truncate table #prep_dist insert into #prep_dist Select id,1,count(*) from syscomments group by id truncate table #dist Insert Into #dist(log_no,count_no) select charindex(substring(convert(varchar,count_no),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,count_no)) , count(*) from #prep_dist group by charindex(substring(convert(varchar,count_no),1,1), "0000111112344456789")/5 -3 + 3*datalength(convert(varchar,count_no)) Insert Into #report Select "hal",log_no, " " + convert(varchar,count_no) + " Objects Have " + substring("0125",sign(log_no)*((log_no +2)%3 + 1)+1,1) + replicate("0",(log_no-1)/3) + " <= #rows <= " + 
substring("0149",sign(log_no)*((log_no+2)%3 +1)+1,1) + replicate("9",(log_no-1)/3) from #dist /* Generate the report */ Select "sp_exam_space Report"=text from #report order by section, seq go grant execute on sp_exam_space to public go _________________________________________________________________ -- Pablo Sanchez | Ph # (415) 933.3812 Fax # (415) 933.2821 pablo@sgi.com | Pg # (800) 930.5635 -or- pablo_p@corp.sgi.com =============================================================================== I am accountable for my actions. http://reality.sgi.com/pablo [ /Sybase_FAQ ] ---------------------------------------------------------------------- Path: news1.ucsd.edu!ihnp4.ucsd.edu!munnari.OZ.AU!news.mel.connect.com.au!news.mira.net.au!Germany.EU.net!howland.reston.ans.net!gatech!news.mathworks.com!enews.sgi.com!news.corp.sgi.com!mew.corp.sgi.com!pablo From: pablo@sgi.com (Pablo Sanchez) Newsgroups: comp.databases.sybase,comp.answers,news.answers Subject: Sybase FAQ: 6/8 - section 5 Supersedes: Followup-To: comp.databases.sybase Date: 1 Jul 1996 14:32:22 GMT Organization: Silicon Graphics, Inc. Nederland, CO. USA Lines: 12424 Approved: news-answers-request@MIT.EDU Message-ID: References: Reply-To: pablo@sgi.com NNTP-Posting-Host: mew.corp.sgi.com Summary: Info about SQL Server, bcp, isql and other goodies Posting-Frequency: monthly Originator: pablo@mew.corp.sgi.com Xref: news1.ucsd.edu comp.databases.sybase:29425 comp.answers:15547 news.answers:62018 Archive-name: databases/sybase-faq/part6 URL: http://reality.sgi.com/pablo/Sybase_FAQ Q5.1: TECHNICAL NEWS Volume 3, Number 2 May, 1994 _________________________________________________________________ Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability of fitness for a particular purpose. In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect special incidental or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein including the entire cost of all necessary remedies. Staff Principal Editor: Leigh Ann Hussey Contributing Writers: John Blair, James Gath, Karen Hogoboom, Jeff Lichtman, Steve Olson, Benjamin von Ullrich, Elton Wildermuth, Sybase Engineering, Sybase Tech Support Send comments and suggestions to: SYBASE Technical News 6475 Christie Avenue Emeryville, CA 94608 This issue of the SYBASE Technical News contains new information about your SYBASE software. If needed, duplicate this newsletter and distribute it to others in your organization. Keep this newsletter with your SYBASE Troubleshooting Guide. Get Certified! A new program has been recently announced which will allow Sybase customers, employees, partners and others to demonstrate official competence in the SYBASE architecture and product set. The Certified SYBASE Professional (CSP) Program is targeted at client/server architects, designers, developers, systems and database administrators and support engineers. The first CSP certification is the Certified Sybase DBA. This certification program will be beta tested in North America and the UK in January. 
Other likely certifications will include Open Interfaces Developer, Application Developer and Architect. Professional Services is working with Customer Services and Support to ensure that Sybase offers a single integrated certification program for both development and support professionals. For more information contact Wendy Washington at Sybase, 510-922-0959, and ask for the CSP Brochure. Troubleshooting Guide Update Troubleshooting Guide Goes Online The new version of the Sybase Troubleshooting Guide is on its way! It includes over 60 new error messages, as well as new and revised material on many Sybase products. Due to the increasing amount of information, it now comes in two volumes: a Server volume, and a Tools and Connectivity volume. The new Troubleshooting Guide will be available in both paper and electronic form. Both volumes will be included on the AnswerBase CD-ROM support product. The Server volume will be included on the first SyBooks CD-ROM publications product. Both AnswerBase and SyBooks CDs will be available in Q3/Q4. Mass updates of future versions of the Troubleshooting Guide will be provided free of charge in AnswerBase and SyBooks formats. Paper manuals will still be available for purchase (or print your own from the electronic version!). Getting Your Troubleshooting Guide Feedback The goal of the Sybase Troubleshooting Guide is to help you better use (and support) Sybase products, and to do so more self-sufficiently. To accomplish this, we need your feedback. To this end, a mail alias called tsg has been established. The intention of this alias is to allow Sybase customers and employees to comment on the Troubleshooting Guide by mailing tsg@sybase.com with: * Corrections * Requests for specific additions * Additions (i.e., written material) * Comments about which sections are particularly helpful * Comments about which sections are not clear * Any other input you might have tsg will not be a forum for answering questions best taken to Technical Support, but will be your chance to make the Troubleshooting Guide more useful for everyone who uses it. The next issue of the Troubleshooting Guide will be slightly different from previous issues. It will come in two volumes, a Server volume, and a Connectivity volume, and will be released in both hardcopy and electronic versions (Answerbase and SyBooks). Sybase Support Publications is considering releasing future issues of the Troubleshooting Guide only in electronic format; customers are requested to mail tsg@sybase.com to give us feedback on this topic. 803 Error Installing from CD-ROM Numerous customers have called about receiving 803 errors when they try to install SYBASE products from CD-ROM. Here are the complete instructions for such installations; please disseminate this information as widely as you wish. Step 1: Login as root. Step 2: Mount the CD as a filesystem on a UNIX machine. The exact mount command differs between platforms. In the following examples, * /dev/sr0 stands for your CD-ROM device name. * /cdrom is a local directory you have created. * mount usually exists in the /etc directory. You may have to cd /etc before you can issue the command. Sun 4 # mount -rt hsfs /dev/sr0 /cdrom Sun_SVR4 (Solaris) # mount -o ro -F hsfs /dev/sr0 /cdrom _DEC AXP OSF_ # mount -rt cdfs -o noversion /dev/sr0 /cdrom Be sure that the directory is writable by the world, and large enough for the install to complete! Don't forget to log out as root before continuing installation.
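Putting steps 1 and 2 together for the Sun 4 case, a minimal sketch (the device name /dev/sr0 and the /cdrom directory are just the examples above; substitute your own):

    # as root
    mkdir /cdrom
    /etc/mount -rt hsfs /dev/sr0 /cdrom
    df /cdrom    # verify the CD-ROM filesystem is mounted before continuing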
SYBASE software should be installed by the sybase user in order for permissions to be set correctly. Step 3: After you have mounted the CD, go to the directory in which it is mounted. Step 4: Run sybload -D, which is the disk-install version: % cd /cdrom % ls sybload sybimage % ./sybload -D sybload is an executable; sybimage is the global file containing the suite of products. sybload will prompt you for the file used for disk installation. Step 5: Give it the name /cdrom/sybimage. From there onward, the process is the same as installation from tape. Hanging, Sleeping, and the "Zombie" Process What are the different states of SLEEP? When all processes are shown as SLEEPING by sp_who except the one which issued the sp_who command, how can a user tell the difference between hanging versus running processes? What is a "zombie" process and how can it be dealt with? Definitions In pre-4.9.2 SQL Servers, the output of sp_who could be difficult to interpret. Processes showed only one type of SLEEP status, "sleeping". In System 10, and 4.9.2 Rollup 2115 and above, sp_who shows four types of sleep along with the other possible statuses: Value Meaning ----- ------- infected The server erred with a stack trace, and the process got an error that would normally kill it. The process is infected instead of killed. background This process is in the background. recv sleep The process is waiting for input from the client. send sleep The process is waiting for a write to the client to complete. alarm sleep The process is waiting for an alarm (usually means the process is waiting for a waitfor command to complete). lock sleep The process is waiting for a lock to be granted. sleeping The process is waiting for something not listed above. This is "normal" sleep. runnable The process is not waiting for anything, and is ready to run, but is not the currently running process. running The process is currently running (in a multiprocessing system, there can be more than one such process). stopped The process is stopped. In ancient history (before version 4.0.1), all processes stopped during a checkpoint. Now the only time a process is in the stopped state is when someone is using the kill command on it. bad status There is a bad value for the status of this process. In uniprocessor hardware there can be only one process RUNNING and all other processes are either SLEEPING or RUNNABLE. The next RUNNABLE process gets scheduled to run after sp_who finishes. Processes sleep for certain events like disk I/O, network I/O, alarm, etc. If all the threads are shown as SLEEPING, at least one of them will become RUNNABLE once the event it is waiting on occurs. On a multi-processor machine, if more than one SQL Server engine is started, you can see more than one thread in the RUNNING state. The number of processes running cannot exceed the number of SQL engines running. It is not possible to find out from sp_who output which client process is hung waiting for Server response. But it is possible to find out if any process (i.e. thread) is blocked by another by looking at the "blk" field of sp_who. For more details please refer to the Commands Reference Manual. Before System 10 -- Night of the Zombie Process Pre-System 10 SQL Servers can end up with "zombie" (unkillable hanging) processes if the event on which a thread is sleeping never happens. In this case, the thread does not run and cannot be killed. This anomaly existed right from the first release of 4.0 SQL Server until a recent Rollup of 4.9.2 (2115 and above).
The problem is that the SQL Server scheduler is non-preemptive. This means that tasks cannot be put to sleep or woken up arbitrarily by the SQL Server scheduler; all task context switching is done voluntarily by running tasks. Pre-System 10 SQL Servers handle attention through a signal handler set up to catch OUT-OF-BAND data; the handler sets a status bit in the PSS (Process Status Structure). This is an asynchronous event. For example: a task is about to go to sleep waiting for input, but the client cancels the query with dbcancel(). If the signal handler sets the bit between the time the task is going to sleep and the time it is actually put to sleep, then the server task sleeps forever waiting for the client to send some data, and the client sleeps waiting for the server to acknowledge the cancel request. This is the well-known "dbcancel problem." Another source of trouble can be a DBMS task in the Server which is sleeping on a network I/O from a client that just isn't there any more (maybe because somebody rebooted the PC on which the front end was running). This kind of task cannot be killed because: * The task must be in RUNNABLE state so that the scheduler can kill the task the next time it runs. * The task cannot be preempted because its state is unknown. To complicate the above scenario, if the eternally-sleeping task started a transaction, it may potentially hold locks on different pages. The only solution for older versions is to reboot the SQL Server. A Wooden Stake for the Zombie Process As of the 10.0 SQL Server, and 4.9.2 SQL Server Rollup 2115 and above, zombie processes can now be killed. The new kill command not only sets the bit in the PSS as it used to, but also wakes up the task if it determines that the task is sleeping in one of four states: * waiting to receive something from the client, a common state _(RECV SLEEP)_ * waiting for a send to be completed by the network service task _(SEND SLEEP)_ * waiting on an alarm because user did a waitfor delay command _(ALARM SLEEP)_ * waiting on a lock (resource, logical, semaphore, etc.) (LOCK SLEEP) This means that any task can be cleaned up properly as if an exception has occurred while the task was running, provided the task is in one of the RECV, SEND, LOCK and ALARM sleep states. The new kill command can kill infected processes as well, also a new feature. The kill command can almost instantaneously kill a task that is sleeping on any one of the events except the fifth state: normal sleep (where the task is waiting for a resource to post network or disk I/O). This was true for older versions of SQL Server, and is still true. The reason for this is that all sleeps except "normal sleep" have well-known and well-understood backout paths; however, tasks sleeping on resources have a variety of different backout paths. The new kill command will: * set the "kill yourself" bit on the task * wake up the task normally * put the task into the runnable queue When the scheduler is ready to run the task it finds the "kill yourself" bit and aborts the task. For tasks that are in normal sleep or for running tasks, the new kill command acts exactly as the old kill command: it sets the kill bit in the PSS and it is up to the task to wake up and test the bit and decide to kill itself. Note that this means that the new kill command may not have an effect on all tasks. NOTE! If a killed task is in the midst of a transaction, the entire transaction will abort.
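For illustration, here is one way to drive this from the command line (a minimal sketch; the password, server name, and spid 12 below are placeholders -- take the real spid from the sp_who output):

    # list tasks and their sleep states, then kill the stuck one
    # (spid 12 is hypothetical; kill requires sa privileges)
    isql -Usa -PyourPassword -SYOURSERVER << EOF
    sp_who
    go
    kill 12
    go
    EOF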
All resource cleanup will occur in the task backout path so that no inconsistent resources are left hanging around that might cause the SQL Server to hang in a hibernating state and eventually have to be rebooted. There were regressions, caused by the new kill command's original implementation, which could cause the server to hang (bug 51270) or not completely kill the process under certain conditions (bug 48964). These bugs were fixed as of Rollup 2359, which can be ordered from Tech Support. This fix is not available on the SQL Server NLM release for Netware. Instead, Netware sites must use a different method for avoiding zombie processes. How to Eliminate Zombie Processes on SQL Server NLM To eliminate Zombie processes from the Novell SQL Server: 1. Download the STRTLI.EXE file from CompuServe, from the NOVLIB forum (currently in forum 1). 2. Execute the file STRTLI.EXE - this expands to 8 NLMs and 2 documents. The NLMs are: STREAMS.NLM, TLI.NLM, SPXS.NLM, SPXLISFIX.NLM, SPXFSFIX.NLM, SPXSIX2.NLM, IPXS.NLM, and PATCHMAN.NLM. The documents are: STRTLI.DOC and PCHMN230.DOC. 3. STRTLI.DOC contains instructions on how to load the files. Load 4.2.2 of the NLM SQL Server first, then the new files from Novell. If you load SNLMINST after loading the new files, you will have written over several of the new files and will need to reload them. DECnet Errors and Hanging Processes This past spring, a customer running the 4.8 version of SQL Server for VMS encountered a problem with hanging/sleeping server processes left after network/router errors caused disconnects. A change in the AST networking strategy for DECnet solved this problem (bug 40459 on VMS, bug 38156 on AXP). The behavior exhibited was that users would switch off their PC front ends without properly logging out of connections to the SQL Server. The Server would not deassign the channel and delete the mailbox on error exit from its connection-accepting system call. The customer's router and the design of the client application caused so many errors that the Server ran out of I/O channels. Sybase Engineering modified the VMS SQL Server so that it now deassigns the channel and deletes the mailbox after rejecting the connection. This change, originally made in the System 10 release of SQL Server, was back-ported to the 4.9.x codeline and is available as part of the latest Rollup. The Problem of Null Column Names How does one refer, if at all, to an undefined column name resulting from a select into of a built-in function? In 10.0 SQL Servers, NULL column names are not allowed (see last issue's Special Supplement). In earlier SQL Servers, when a name is defined as NULL in syscolumns, the following situation ensues: 1> select title_id, title, price, convert(varchar(30), total_sales) 2> into scratchdb..titletab 3> from pubs2..titles group by title_id, convert(varchar(30), total_sales) 4> go (18 rows affected) This creates a table with four columns, one with no name. Attempts to use the null name fail. 1> use scratchdb 2> go 1> create clustered index x on titletab (title_id, "") 2> with ignore_dup_row 3> go Msg 102, Level 15, State 1: Server `SYBASE', Line 1: Incorrect syntax near ` `.
1> select convert (varbinary, name) from syscolumns where id=object_id("ben") 2> go -------------------------------------------------- 0x636173656e756d 0x6c6f675f6964 0x74696d65696e NULL (4 rows affected) In order to get around this problem, you may use sp_rename in an intuitive way, to get rid of the NULL column name as follows: 1> sp_rename "titletab.", request_status 2> go Column name has been changed. (return status = 0) "Too Many Open Files" on Solaris If you have a Sun_SVR4 (Solaris) SQL Server that reports the error "Too many open files" from a kernel subsystem, you will need to change the maximum number of file descriptors per process (NOFILES kernel parameter on most systems). There are two ways to reset this value: 1. Modify your RUNSERVER script as follows, depending on your shell: sh or ksh RUNSERVER script: ulimit -n ## csh RUNSERVER script: limit descriptors ## where ## = the new value for file descriptors 2. Run a program which calls setrlimit() to increase the maximum number of file descriptors and then invoke a process which requires a large number of fd's (file descriptors). Here are a sample program, set_nofiles.c, and its makefile to show you how to do this. 1. Build the executable by typing the command (assuming you named the makefile set_nofiles.mk): make -f set_nofiles.mk 2. Run the executable by giving it the name of any program to run along with its command line options, for example: set_nofiles foobar x. You can have it run startserver -fRUNSERVER or the dataserver program. You must run the ulimit/limit or makefile commands as root in order to set the maximum number of file descriptors > 1024. Note: this procedure is documented in the System 10 SAG Supplement for SunOS Release 5.x (SVR4), page 4-1. ***************source for set_nofiles.c************************************** /* set_nofiles.c, set_nofiles, Customer Service group, 07/02/93 /* routine to increase the maximum number of file descriptors per process /* Copyright (c) 1993, Sybase, Inc., Burlington, MA 01760 /* All Rights Reserved /* /* TITLE: set_nofiles /* /* START-HISTORY: /* /* 02 Jul 93 edit 0 - Lance Andersen. /* Initial coding. /* /* END-HISTORY /* /* START-DESCRIPTION: /* /* set_nofiles is a program which can be run to execute the RUNSERVER /* file (or any other executable) in order to increase the number of /* file descriptors available per process in order to avoid the OS /* error EMFILE (Too many open files): /* To use this program: /* Build as follows: /* cc set_nofiles.c -o set_nofiles /* /* While logged in as root, set the following ownership and permissions: /* chmod u+s set_nofiles /* chown root set_nofiles /* /* To execute: /* set_nofiles command /* /* When set_nofiles executes, it will set the max file descriptors to the /* value defined by the MAX_FD #define. The program will run under root /* ownership while file descriptors are being changed and then ownership /* will revert back to the user who invoked the program (in order to /* prevent security breaches).
/* END-DESCRIPTION /* /* START-DESIGN: /* /* END-DESIGN /* /* START-FUTURES: /* /* END-FUTURES /* /* START-CODE: */ /* ********************************************************************** */ /* Define OS include files */ /* ********************************************************************** */ #include <sys/types.h> #include <sys/time.h> #include <sys/resource.h> #include <stdio.h> #include <errno.h> #include <unistd.h> /* ********************************************************************** */ /* Define constants */ /* ********************************************************************** */ #define MAX_FD 3000 /* set max number of fd's per process (NOFILES) */ main(argc, argv) int argc; /* Number of arguments */ char **argv; /* arguments */ { struct rlimit rlim; /* soft & hard resource limits */ /* ****************************************************************** */ /* Set the maximum and current value for fd's per processes (NOFILES) */ /* ****************************************************************** */ rlim.rlim_max= MAX_FD; /* hard limit */ rlim.rlim_cur= MAX_FD; /* soft limit */ if(setrlimit(RLIMIT_NOFILE, &rlim)==-1) { /* Oops, we failed, print error msg and OS error number */ fprintf(stderr, "OS error %d encountered while changing RLIMIT_NOFILE to %d\n", errno,MAX_FD); exit(-1); } /* ****************************************************************** */ /* reset the uid to the user who invoked the program so that the */ /* process does not run as root. */ /* ****************************************************************** */ setuid( getuid() ); /* ****************************************************************** */ /* Now execute our program passing required arguments */ /* ****************************************************************** */ if(argc >1) execv(*++argv, argv); else fprintf(stderr,"Warning, no argument passed to set_nofiles\n"); } /* END -- MAIN */ * Now the makefile ******* # set_nofiles.mk # # Makefile to create set_nofiles command # Note: to run the install portion, you must be logged in as root DEFAULT_CFLAGS= LIBRARIES= CFLAGS= -c $(DEFAULT_CFLAGS) $(EFFLAGS) INCLUDE= OBJECT = set_nofiles.o BUILD_NAME=set_nofiles INSTALL_DIR=$(HOME)/bin OWNER=root # # Default rule to delete, build and install set_nofiles # all: clean build install # # Build the binary # build : $(OBJECT) $(CC) $(OBJECT) $(LIBRARIES) -o $(BUILD_NAME) # # build the binaries # $(OBJECT): $(OBJECT:.o=.c) $(CC) $(INCLUDE) $(CFLAGS) $< # # Remove all object files # clean: rm -f $(OBJECT) $(BUILD_NAME) # # install the product # install: cp $(BUILD_NAME) $(INSTALL_DIR)/$(BUILD_NAME); cd $(INSTALL_DIR); chown $(OWNER) $(BUILD_NAME); chmod u+s $(BUILD_NAME) _________________________________________________________________ Specifying TCP Port in System 10 for Solaris A customer installing Sybase 10.0 on Sun_SVR4 (Solaris) ran into an odd problem. With an original specification of TCP port 2000 for the SYBASE Server process, the installation failed. When the port number was changed to 2025, as shown in the example in the Install Guide, the installation worked fine. The manual suggests that any port not currently used between 1024 and 65535 can be specified -- what caused this failure? It appears that 2000 was listed in /etc/services. Round numbers like 2000, 6000, etc. are often used for network services. We recommend, therefore, that you avoid using round numbers for your port numbers.
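Before settling on a port number, it is easy to check whether it is already registered (a minimal check, assuming the standard /etc/services location):

    # a non-empty result means the port is already claimed by a service
    grep "2000/tcp" /etc/services
    # if anything prints, pick another (non-round) number, e.g. 2025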
Solaris 2.3, Intimate Shared Memory, and Server EBFs Revisited Sun Support has been passing along information to our mutual customer base that either EBF 2592 or EBF 2594 is required, along with their patch 101318-35+, to prevent certain system panics. As a result, customers are calling Sybase Technical Support and requesting one of these EBFs. Further, the 10.0.1 Server Release Notes claim that our customers will need EBF 2594 and Sun's patch to prevent Sun OS panics while running SQL Server. This article will clarify the issue and make some important amendments to Sun Support's information. The majority of customers will not encounter a problem after installing only the Solaris patch. Most customers' Servers will boot and run just fine. Customers who have memory configured to a value near (or greater than) the physical memory on the machine may experience problems. Generally speaking, very few customers fall into this group, and those that do would probably be better off configuring memory to a level at which Intimate Shared Memory can again be used and the old Server boots successfully (see Summary notes). First, a word on Sybase's EBFs. There will be no EBF 2592 for 4.9.2. The reason is that ISM use is not a feature of our 4.9.2 Server. The only way ISM can be used is if the system forces all user applications to use ISM. If that is the case, and the Sun patch has been applied, the Server simply will not boot. In that case, disable this Solaris kernel feature. This is not the default behavior, and will only be an issue for customers who have modified a system startup file to enable this feature. In a future 4.9.2 Rollup, we may add support for ISM use via this Solaris feature. However, since this is a product enhancement rather than a bug, there will be no one-off EBF. The EBF for 10.0.1 will be EBF 2917, not EBF 2594 as was reported in the 10.0.1 Release Notes. Here are answers to some common questions about this issue, followed by a summary. Common Questions and Answers Q: What is ISM and why is it important? A: ISM (Intimate Shared Memory) is an enhancement in Solaris 2.X which allows multiple processes to share certain low level data structures in the Solaris Virtual Memory (VM) system. These data structures are not normally shared and are used for virtual-to-physical address translation. These translation mappings are loaded into the hardware Memory Management Unit (MMU) as required. Sharing these data structures means that less loading and unloading of the MMU will be required during a context switch. Depending on the number of engines which are running and the overall system load, not having to load and unload the MMU can have a considerable positive impact on performance. One other benefit of using ISM is that the virtual memory is actually locked into physical memory by the operating system. This means that the Server has true control over what is in memory when doing memory management. It also means that a user starting up an editor or compiler won't steal memory from the Server resulting in the operating system throwing the Server memory pages out to the system swap space. Again, this can have a considerable positive impact on Server performance. Q: I assumed that as long as the Server would start up (with the Sun patch installed), we would see no more panics, regardless of whether or not the EBF is installed. Is that correct? A: If you have the Sun patch installed, this particular panic condition will not occur. User applications such as our Server aren't an issue.
Our EBF is only required to allow for the change in the shmat() system call behavior, so that the Server can boot.

Q: Do you foresee any major system performance issues involved with the EBF?

A: No. There may be two extra shmat() system calls at Server boot time; the performance effect of these is negligible and a one-time occurrence. Of course, the Server may not use ISM in some instances, which will affect performance. For example, right now the OS is letting you use ISM even when it can't lock down the memory. It shouldn't do that, because the system will later panic when it tries to unlock the pages. However, you do get the benefits of ISM at the cost of system panics. With Sun's fix, you won't be using ISM, as the system will be able to detect that it can't lock down the memory. This could have a very visible impact on performance. It is preferable to configure the Server to use an amount of memory which will allow for ISM use. No Server EBF is required in this case.

Q: We have received and installed a patch from Sun, 101318-35+, which is supposed to fix Sun's half of the problem. The engineer with whom I spoke said that the Sun patch would not stop the panics. He said that only applying a Sybase EBF or changing an ISM configuration parameter would stop panics.

A: This is incorrect. No user application can panic an OS. All our EBF will do is allow the Server to boot. The ISM parameter the engineer referred to probably simply turns off ISM use altogether.

Q: The Sun engineer said that the ISM configuration change would slow Sybase performance appreciably and that another customer experiencing the same bug had chosen not to make the change.

A: The only performance implication is that the Server won't use ISM if the Solaris kernel cannot lock down the memory, as it must in order to prevent a later system panic. It's a simple question of resources. SQL Server 10.0 can use an OS feature such as ISM if the OS can supply the resource. Whether or not the OS can supply the resource depends on how the Server is configured. That is, you can't configure the Server to use 256MB of memory on a machine with only 128MB of physical memory and expect it to use ISM. If the Server won't start after installing the Sun fix with a given memory configuration value, then decide whether you want to decrease the memory to the point at which it will still use ISM, or whether you want to skip the use of ISM altogether. If you choose the first option, you can use the stock Server and Sun's patch, and your system shouldn't panic unless other Solaris bugs show up. If you choose the second option, then you probably need EBF 2594 (or 2917, depending on your Server version). You should certainly decide whether you want to use ISM or not and configure memory usage accordingly. Sybase Technical Support recommends configuring memory to a point where ISM can be used, so that the Server truly has control of paging activity.

Summary

In all cases, customers should install version 35 or later of Sun patch 101318. If you are running version 4.9.2 of SQL Server, you must not force ISM use through the Solaris kernel. If you are running a System 10 Server and want to use ISM, you must configure the Server to use an amount of memory that the Solaris kernel can lock into physical memory. No EBF is needed in this case. If you would rather skip ISM use and want to configure the Server memory to a value too large to be locked down by the Solaris kernel, then you'll need EBF 2594 or 2917.

Generally speaking, it is preferable to configure the Server to use ISM memory rather than running either EBF. If the non-EBF Server will not boot after installing the Sun patch, the sp_configure value for memory is set too high. To allow for ISM use and get the Server to boot, configure the Server to use memory equal to 90 percent of the physical memory in the machine. If the Server still will not boot, decrease the requested memory in 10 percent increments until the Server will boot.
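As a rough worked example of that guideline (the machine size and resulting value are assumptions for illustration, not a recommendation for your site): on a machine with 256MB of physical memory, 90 percent is about 230MB; since sp_configure expresses memory in 2K pages, that is roughly 117,500 pages:

1> sp_configure "memory", 117500
2> go

The new value takes effect the next time the Server is rebooted.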
Important Differences in Interfaces File, TLI vs. Sockets

Customers who receive the Release Bulletin for Solaris 2.x (SunOS 5.x) find the following information under section 2.4, "Installation Compatibility":

2.4.3. Migration from SunOS 4.x (BSD) to SunOS 5.x (SVR4)

Avoid sharing interfaces files among servers and clients running on different operating systems. If your SQL Server and all of the machines on your network formerly ran on SunOS 4.x, the migration of some of the machines to run on SunOS 5.x creates a heterogeneous network environment. The executable files created on SunOS 4.x will not run on SunOS 5.x, and file locations and formats will be different. When you create an interfaces file entry for a SQL Server running on SunOS 5.x, the interfaces file is written in TLI format, and any existing entries in the SQL Server's interfaces file are converted to this format.

This article will explain some of the implications of this Release Bulletin entry. On Sun4 the interfaces file entries look like this:

SQLSRV_SUN4
	query tcp sun-ether barracuda 3060
	master tcp sun-ether barracuda 3060
	console tcp sun-ether barracuda 3061

On Solaris, they look like this:

SQLSRV_SOL
	query tli tcp /dev/tcp \x00020bf482d6dcb8
	master tli tcp /dev/tcp \x00020bf482d6dcb8
	console tli tcp /dev/tcp \x00020bf582d6dcb8

The hex translates as follows:

\x0002 | xxxx   | xxxxxxxx             | 0000000000000000
       | port # | network node address | trailing zeros (optional)

In the above example, then, the port number would be 0x0bf4 = 3060 (0x0bf5 = 3061 for console) and the network node address would be 0x82d6dcb8 = 130.214.220.184.

The conversion from "old" format to "new" or TLI format is done automatically by sybinit. It even saves a copy of the old file. Bear in mind that sybinit converts all the entries in the interfaces file, so don't try to share the interfaces file between the "old" and the "new" formats by making two entries in the file. The solution is to have separate interfaces files, as indicated below.

This is likely to be an issue only for customers that currently have a homogeneous environment of all SunOS 4 (Solaris 1) when they upgrade some hosts to SunOS 5 (Solaris 2). This upgrade will give such customers a heterogeneous network environment. In other words, the executable files from SunOS 4 will not run on SunOS 5, the locations of files probably will be different, and the file formats probably will also be different. You should be aware of this regardless of the applications you plan to run. This may also be noticed more from the Client side than from the Server side. For the Server side it should not be a problem: the installation and conversion of the interfaces file from SunOS 4 to SunOS 5 will be handled by sybinit. The new format for specifying host addresses for TLI is common on SVR4-based systems. If you have upgraded a host to SunOS 5, you will already be familiar with this method for specifying addresses.
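Going the other direction is a handy sanity check if you ever edit these files by hand. For instance (an illustrative entry, not from the Release Bulletin): to build a TLI entry for port 5000 on node 130.214.220.184, write the port as four hex digits (5000 = 0x1388) and each octet of the address as two hex digits (82 d6 dc b8), yielding:

	query tli tcp /dev/tcp \x0002138882d6dcb8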
If client workstations are not being upgraded to SunOS 5 at the same time, they must continue to execute SunOS 4 binaries and also use the old-style interfaces file. When client workstations are upgraded to SunOS 5, they will need to use the new-style interfaces file along with their new binary executables.

Backup Server: Load from a Single Device of a Striped Dump

Backup Server customers may wonder if it is possible to do a striped dump but then load from a single device. The answer is "yes", but only when the dump is to tape or floppy devices and those devices are local. The specific steps are:

1. Install the first tape to load and execute:

   load database from `/dev/nrst0'

2. Follow the prompts when Backup Server asks for subsequent mounts, issuing sp_volchanged as appropriate. sp_volchanged must be issued from a server login other than the one doing the dump or load. SQL Server is not set up to pause in the middle of command execution, so the sp_volchanged command (which is a separate command) must be issued by a separate server session.

Stripes may be loaded in any order, but if any stripe spans volumes, the volumes of that stripe must be loaded in sequential order.

Backup Server Dump Tracking and Naming Strategies

Different sites use different strategies for tracking dumps on a particular piece of media. One perfectly valid way is to put known labels on a group of tapes and recycle them over time. If you do full dumps once a week, with transaction dumps in between, you could call them "xxx.monday", "xxx.tuesday", etc. How would you choose a particular dump to load? Use the file=name qualifier on the load command. Even if you choose not to name files when dumping, Backup Server prints the name of the file it mounts at each dump. Customers can (and should) write these names down for future reference. You know which file to specify for the load because you have a convention for naming files, and/or wrote down the name that Backup Server printed. Here is a possible method:

1. On dump, input the following:

   dump database foo to `/dev/nrst0' file='foo.dump'

2. Input the following to load:

   load database foo from `/dev/nrst0' file='foo.dump'

Subsequent transaction files will be "foo.x.1", "foo.x.2", etc. (where the "x" stands for "xact").

The Dangers of dump transaction with no_log

In the Commands Reference Manual, under dump transaction with no_log, there is a warning message stating that you should only use this command as a last resort. But what exactly does "last resort" mean? What happens when you use the command? What should you use instead? And finally, if this command is so bad, why does Sybase provide it?

Sybase Technical Support recommends that you dump your transaction log regularly. You must determine the dump schedule by the amount of logged activity your databases get and how large your databases are. Some sites dump transaction monthly; some sites dump transaction nightly.

NOTE! If you are running SQL Server 10.0, you may use sp_thresholdaction to dump tran automatically before space becomes critical; additionally, the Backup Server now ensures that tasks will not hang while dumps are in progress. Please see your SQL Server Reference Manual for more details. The remainder of this article applies only to sites running pre-System 10 versions of SQL Server.

If you never dump transaction, the transaction log eventually fills up. SQL Server uses the log for recovery purposes.
When the log fills, the server stops allowing transactions to go through, because it can no longer write to the log, and the server cannot recover unrecorded transactions. Inserts, updates and deletes hang. At this point, you can't even run most of the dump tran commands, because SQL Server logs these as well! This is the "last resort" situation that dump transaction with no_log was designed for. This command removes the inactive portion of the log, freeing up log space and letting database changes proceed, by writing no log records itself. This is why no_log runs when other dump tran commands can't.

But think for a moment about the environment in which dump transaction with no_log is designed to run. All the changes being made to the database are hung. No transactions are occurring. So dump transaction with no_log does no concurrency checking. If you use dump transaction with no_log while changes are being made to the database, you run the risk of corrupting the database. Most often, these corruptions show up as 813 or 605 errors.

To remove the inactive portion of the transaction log while changes are being made to the database, use dump transaction with truncate_only. This command is written to the transaction log, and it also does the necessary concurrency checking. Both of these commands have warnings associated with them, which can be found in the Commands Reference Manual. Be sure that you understand the warnings and the instructions that accompany them before you use either command.

Sybase provides dump transaction with no_log to handle a very specific emergency situation. To help ensure the integrity of your database, you should use it only as a "last resort".
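For example, to prune the inactive portion of the log while activity continues (the database name here is illustrative):

1> dump transaction mydb with truncate_only
2> go

Remember that truncate_only, like no_log, does not save the log anywhere, so follow it with a full database dump to keep your recovery chain intact.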
Hiding the isql sa Password from a ps Command

How do you run isql scripts without having to specify the -Ppasswd option on the command line? Because the command line is visible from the ps command, running isql -Ppasswd is a security breach. In newer versions of SQL Server, we delete the password from the command line by copying it into an internal buffer and setting the password in argv to all blanks. Unfortunately, this technique does not work for all versions of UNIX; some versions of UNIX give the program a copy of argv, but display the original version in the ps output. Even on Sun, our method of blanking out the password on the command line does not provide security. If one uses the -c option with ps, it will print the original command line, not the copy that was given to the program. ps -e will show the password if it has been saved as an environment variable. There are two possible solutions to this problem, one for batched and another for non-batched sessions, both requiring that the isql password be saved in an external file.

Method 1: Non-batched Sessions:

1. Save the password in a file that is readable only by the owner, as follows (you need to type this one time only, from the command line):

   cat > scratchfile
   mypassword
   ^D
   chmod og-rwx,u+rw scratchfile

2. Run the session from the command line, or as a shell script:

   isql -Uusername <
_________________________________________________________________

1> select au_lname from authors
2> where au_lname = "Ringer"
3> go

STEP 1
The type of query is SELECT
FROM TABLE authors
Nested iteration
Index : aunmind

1> select au_lname from authors
2> go

STEP 1
The type of query is SELECT
FROM TABLE authors
Nested iteration
Table Scan

Since 4.9.2, even a simple SELECT FROM table will use the index if the SELECT is from an indexed column, as in this example from a production 4.9.2 version of SQL Server:

1> select au_lname from authors
2> go

STEP 1
The type of query is SELECT
FROM TABLE authors
Nested iteration
Index : aunmind

4.9.1 SQL Server EBFs above 1690, which bring the Server to the 4.9.2 level, also have this feature. If there is more than one index that covers a query, the smallest will be used. Also, since a non-clustered index has at the bottom of its tree one index row for each data row, the index will also be used for a simple SELECT COUNT(*) FROM enormous_table query.

How to Generate Sequential Keys for Table Key Columns

Many customers wish to generate sequential, unique keys for a key column in a particular table. The objective is simple: to be sure each key is unique. This implies the following:

* The key must be updated by each user accessing it.
* The table containing the key must be locked during update, so that no duplicates may occur.

In System 10, you can both create new tables with the new IDENTITY column and add the IDENTITY column to existing tables. The next issue of this newsletter will explain this new column in more detail. For Server releases before System 10, there are a number of possible methods for generating sequential keys; this article will illustrate only one of the possible options.

The basic idea here is to use one table as the storage area for the sequential key as it is generated and modified. This key can then be selected into a key column in the appropriate table. For example, first create the storage table, which will contain one column and one row:

1> create table KeyStorage (NextKey int)
2> go
1> insert KeyStorage values (0)
2> go
(1 row affected)

Here is a sample table that will be the eventual destination of the keys:

1> create table foo (key int, text char(5))
2> go

First, we update the stored key, conveniently locking it against other users in the process:

1> begin tran
2> go
1> update KeyStorage set NextKey = NextKey + 1
2> go
(1 row affected)

A simple check shows the table is now locked (this list has been edited slightly to save space -- many other locks may exist):

1> sp_lock
2> go
spid locktype  table_id  page dbname
_____________________________________________________________
1    Ex_page   5         894  master
...
1    Ex_table  640005311 0    foo
(26 rows affected, return status = 0)

1> select object_name(640005311)
2> go
_____________________________________________________________
KeyStorage
(1 row affected)

Now you can select the new key safely, and insert it into the target table:

1> declare @newkey int
2> select @newkey = NextKey from KeyStorage
3> insert foo values (@newkey, 'test')
4> go
(1 row affected)

Remember to commit tran! You can test to be sure the insert worked:

1> select * from foo
2> go
 key         text
 ----------- -----
           1 test
(1 row affected)

You can generate the new key by stored procedure, thus giving at least reasonable assurance that the key continues to be incremented by one.
Put the begin transaction statement in the stored procedure -- SQL Server will remind you that you must commit the transaction "by hand":

1> create procedure get_newkey as
2> begin tran
3> update KeyStorage set NextKey = NextKey + 1
4> go
1> exec get_newkey
2> go
Msg 266, Level 16, State 1:
Server `REL492_SUN4', Line 1:
Transaction count after EXECUTE indicates that a COMMIT or
ROLLBACK TRAN is missing. Previous count = 0, Current count = 1.
(return status = 0)
1> declare @newkey int
2> select @newkey = NextKey from KeyStorage
3> insert foo values (@newkey, 'test2')
4> go
(1 row affected)
(1 row affected)
1> commit tran
2> go
1> select * from foo
2> go
 key         text
 ----------- -----
           1 test
           2 test2
(2 rows affected)

Other portions of this method may be automated as desired by the more meticulous DBA, but the basic method remains simple and straightforward.

Timestamps Will Roll Over

Technical Support customers asked us, "Is the value of a timestamp guaranteed to be monotonically increasing within a database?" These customers wanted to use a timestamp field to identify rows that have changed since the last time they looked, with a where clause like where timestamp > @previous_timestamp. They wanted just to add a timestamp field to their existing tables, so that SQL Server would maintain the values and they wouldn't have to change any of their own code to identify newly inserted/updated rows. (Deleted rows would need to be treated differently.)

The answer is that the timestamp is not guaranteed to increase indefinitely. A timestamp is a 56-bit integer that will eventually roll over, though it takes a very long time to do so. The only numbers that won't roll over are ones with unbounded storage; there is no such datatype in SQL Server. However, Sybase does guarantee that if the row has changed, the timestamp will differ from the one in the cached copy of the row.
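A minimal sketch of what these customers had in mind (the table and values are illustrative; SQL Server maintains the timestamp column automatically, and the zero constant stands in for whatever value was saved on the previous pass):

1> create table orders (id int, descr char(10), ts timestamp)
2> go
1> insert orders (id, descr) values (1, 'first')
2> go
1> declare @previous_timestamp timestamp
2> select @previous_timestamp = 0x0000000000000000
3> select id, descr from orders where ts > @previous_timestamp
4> go

Between roll-overs this works as expected; the roll-over caveat above is what keeps it from being a guarantee forever.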
SQL Monitor Client/Server and SQL Server Compatibility

The following table is a compatibility matrix for various SQL Monitor Client, SQL Monitor Server and SYBASE SQL Server versions.

SQL Monitor Client              | SQL Monitor Server  | SYBASE SQL Server
(UNIX: sqlmon, PC: servmon.exe) | (monserver)         | (dataserver)
================================================================================
any UNIX version 10.0.1         | 4.9.1               | 4.9.1 or 4.9.2
any UNIX version 10.0.2*        | 4.9.1               | 4.9.1 or 4.9.2
PC version 10.1.0               | 4.9.1               | 4.9.1 or 4.9.2
PC version 10.1.2*              | 4.9.1               | 4.9.1 or 4.9.2
any UNIX version 10.0.1, 10.0.2 | 10.0.0 (see note)   | 4.9.1, 4.9.2 or 10.0.0
PC version 10.1.0               | 10.0.0 for Sun4,    | 4.9.1, 4.9.2 or 10.0.0
                                | Solaris, NCR, AIX   | (this row not certified
                                | (see note)          | but can connect)
PC version 10.1.0, 10.1.2       | 10.0.0 EBF 2714,    | 4.9.1, 4.9.2 or 10.0.0
                                | HP only             | (this client will now connect)
PC version 10.1.2*              | 10.0.0 for Sun4,    | 4.9.1, 4.9.2 or 10.0.0
                                | Solaris, NCR, AIX   |
any UNIX version 10.0.1, 10.0.2 | 10.0.1*             | 10.0.1*
PC version 10.1.2               | 10.0.1*             | 10.0.1*

* = available Q2, '94
Note: monserver 10.0.0 can monitor a 4.9.x SQL Server but must have a SQL Server 10.0.0 directory installed.

The current PC SQL Monitor Client is not certified against any 10.0.0 SQL Monitor Servers. There are pieces of data that are unmonitorable with this configuration; for example, the Device I/O and Network Traffic graphs in the Performance Summary and Performance Trends windows will display `0'. The PC SQL Monitor Client 10.1.2 (due in May 1994) addresses this and will be certified against 10.0.0 Monitor Servers. The version of Monitor Server should go hand in hand with the version of SQL Server. If you need to monitor a 10.0.1 SQL Server, get a 10.0.1 Monitor Server; use a 4.9.1 Monitor Server to monitor a 4.9.x SQL Server.

* HP Monitor Server 10.0.0 EBF 2714 enables PC clients to connect successfully to it.
* Solaris Monitor Server 10.0.0 EBF 2715 enables re-starting of the Monitor Server while remote client connections are still connected to it.
* AIX Monitor Server 4.9.1 EBF 2496 corrects a problem resulting in 100% CPU usage.
* HP Monitor Client 10.0.1 EBF 2507 eliminates dependency on libC.sl.

The following table shows combinations that will not work, nor are they intended to:

SQL Monitor Server | SYBASE SQL Server
-------------------+-----------------------------------------------
4.9.1              | 10.0.0 (this combination may work, but is not certified)
4.9.1              | 10.0.1
10.0.0             | 10.0.1
10.0.1             | 4.9.x or 10.0.0

SQL Monitor Memory Allocation

SQL Monitor Client, upon initial start-up, may cause a noticeable strain on heavily loaded SQL Servers or SQL Servers configured for large size. The strain is due to the client trying to construct the Memory Allocation Chart, which occurs by default on UNIX clients and upon request on PC clients.
New connections to the SQL Server may hang, currently running tasks may temporarily hang, and the SQL Monitor Server may automatically shut down, on the assumption that the SQL Server has also shut down. See the following information to prevent this automatic shutdown. If this strain is noticeable and unacceptable, then request that the Memory Allocation Chart not be created.

* For UNIX clients: invoke sqlmon with the -nomem flag on the command line.
* For PC clients: do not select the "Obtain memory allocation" check box option.

Additional Command Line Options for UNIX & VMS

To prevent SQL Monitor Server from automatically shutting down, the interval at which the SQL Monitor Server checks to see if the SQL Server is active needs to be increased. The default is 120 seconds. To change the interval, you must use an undocumented monserver command line parameter (which is supported, and will be documented in upcoming releases of the manual).

* For UNIX, the parameter is -L<filename>, where <filename> is a file in the current directory that contains only this line:

  heartbeat_interval nn

  where nn is a value in seconds.

* For VMS, the parameter is /localconfig=<filename>, where <filename> would have the same line in it:

  heartbeat_interval nn

For example, to start up a SQL Monitor Server that will poll every 10 minutes for the presence of its SQL Server, the UNIX command is:

  monserver -Usa -Ppassword -MMON_1001 -SSYB_1001 -n5 -0 \
            -Lmonserver_config_file &

where monserver_config_file has the following line in it:

  heartbeat_interval 600

One drawback to using a large time interval is that when the SQL Server actually does shut down, the SQL Monitor Server will remain running longer, and will be holding onto the SQL Server's shared memory segment. You may need to shut down the SQL Monitor Server manually in order to start the SQL Server again.

Net-Gateway Security

Administrators considering the issue of Net-Gateway security must ask themselves two questions: First, do I want to pass some flavor of userid and/or password to the mainframe? And second, if so, which userid and password should be passed?

First of all, you can pass a userid and/or password to the mainframe over Net-Gateway via the sgw_addrpc stored procedure:

  exec sgw_addrpc rpc_name, tran_id, remote_lu, security

The security parameter has one of these three values (an example call follows the list):

* none -- no userid or password will be passed to CICS.
* userid -- userid only will be passed to CICS. NOTE! If you specify userid and you are running the Net-Gateway on OS/2, be aware of the interaction with OS/2's Communication Manager. Examine the NFG file (in the \cm\appn subdirectory) that describes your network to the Communication Manager; check the DEFINE_PARTNER_LU entry corresponding to your host and be sure that the configuration allows CONV_SECURITY_VERIFICATION with the right remote LU.
* both -- both userid and password will be passed to CICS.
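For instance, a registration that passes both userid and password to CICS might look like this (the RPC, transaction and LU names here are illustrative, not from the original article):

1> exec sgw_addrpc my_rpc, my_tran, MYCICSLU, both
2> go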
So, having chosen userid or both under the security parameter, you are ready to decide which userid (and, optionally, which password) is to be passed. There are three possible choices: two associated with user login, and one with transaction grouping. Notice the syntax of the following procedures:

  exec sgw_addlog login,          <-- source-1
                  pwd,            <-- source-1
                  host_login,     <-- source-2
                  host_pwd,       <-- source-2
                  tran_group,
                  con_group,
                  gwctrl

  exec sgw_addtrngrp tran_group,
                     group_login, <-- source-3
                     group_pwd,   <-- source-3
                     langrpc,
                     langpwdlevel

  exec sgw_addrpctogrp tran_group,
                       rpc_name,
                       rpcpwdlevel

Method 1: Taking userid/pwd from "source-1"

If the Net-Gateway is started with the -O flag ("security Override"), the identifying information passed to the mainframe will be taken from the login and pwd columns of the user's Net-Gateway login record. That is, the identification by which the user logged onto the Net-Gateway (or upstream SQL Server, if any) will be propagated to CICS. Notice that all discussion involving "transaction grouping" or "connection grouping" -- that is, the security enforcement machinery of Net-Gateway itself -- becomes irrelevant when the security override is in effect. All security enforcement is deferred downstream to the mainframe (and upstream to the SQL Server, if any).

Method 2: Taking userid/pwd from "source-2"

If the Net-Gateway is started without the security override flag, each user must be associated with a "Transaction Group" in order to execute any RPCs on the Net-Gateway. The transaction group is tied to the user via sgw_addlog. RPCs are tied to the transaction group via sgw_addrpctogrp. Specify "user" as the rpcpwdlevel when you add an RPC to the transaction group if you wish to pass on the values in the host_login and host_pwd columns from the user's login record as userid and password.

Method 3: Taking userid/pwd from "source-3"

Specify "group" as the rpcpwdlevel when you add an RPC to the transaction group if you wish to pass on the values in the group_login and group_pwd columns from the transaction group's record as userid and password.

Special Case: The Language RPC

The Language RPC is executed whenever a user request is not an explicit RPC request. none, user, or group must be specified as the langpwdlevel when the Language RPC is associated with a transaction group via the sgw_addtrngrp stored procedure.

Omni SQL Server and RMS Locking Strategy

It is important for customers to understand the manner in which Omni SQL Server accesses RMS files, since Omni SQL Server is often used in an environment where other processes are accessing the same files Omni SQL Server is expected to access. When Omni SQL Server requires access to an RMS file, it must first determine which access and share modes to use before it can open the file. Two separate attempts are made by Omni SQL Server before an open failure is reported. The first attempt is performed with all RMS access and share modes enabled, as follows:

  access = FAB$M_GET | FAB$M_PUT | FAB$M_UPD | FAB$M_DEL | FAB$M_TRN;
  share  = FAB$M_SHRGET | FAB$M_SHRPUT | FAB$M_SHRDEL | FAB$M_SHRUPD;

If the file cannot be opened using these modes, then a second attempt is performed with the following access and share modes enabled:

  access = FAB$M_GET;
  share  = FAB$M_SHRGET;

If this attempt also fails, then the open fails and Omni SQL Server will not be able to use the file. (In the above statements, access corresponds to an element in an RMS data structure called the File Access Block, or FAB; the element is named fab$b_fac. share corresponds to the element named fab$b_shr in the FAB.)
Once a successful open request has been performed on a file, the access and share modes are cached in memory, so that the second time the file is opened, Omni SQL Server will not have to establish them again.

It should be noted that this has nothing to do with the configuration parameter read regardless. This parameter determines Omni SQL Server behavior after a file has been opened, when an Omni SQL Server thread is attempting to read a record from that file. If read regardless is enabled (set to `1'), then Omni SQL Server will be able to read a record even if it is locked for exclusive use by another VMS process. Please refer to the Omni SQL Server System Administration Guide, Chapter 6, for a discussion of this configuration parameter.

Bugs Fixed in Latest Rollup for 4.9.2 SQL Server

The following is a list of bugs which have been fixed in recent EBF Rollups for the 4.9.2 SQL Server. (The EBF number at the top of each list is for the Sun4 platform.) The current EBF Rollups available now are:

Platform            | Rollup Number
--------------------+--------------
Sun4                | 2825
HP 9000 Series 800  | 2826
IBM RS6000          | 2827
NCR SVR4            | 2828
VMS                 | 2829
Sun SVR4            | 2830
AXP VMS 1.5         | 2831

Previous Rollups, released since the previous issue of Sybase Technical Newsletter, are specified in the tables below and consist of:

Platform            | Rollup Numbers
--------------------+---------------
Sun4                | 2359, 2560
HP 9000 Series 800  | 2360, 2561
IBM RS6000          | 2361, 2562
NCR SVR4            | 2362, 2563
VMS                 | 2363, 2564
Sun SVR4            | 2364, 2565
AXP VMS 1.5         | 2365, 2566

This list is not all-inclusive; only the fixes new and specific to this release, and introduced since the last publication in Sybase Technical Newsletter, are included. Some bugs which do not visibly affect product behavior have been omitted from this list for clarity. This information is also available on Insight and the Sybase OpenLine CompuServe forum.

Bug Fixes New to Rollup 2359-2365

Bug#  Description
51270 `Process infected with 11' error can occur when using kill for a process which has a status of `send sleep'.
51269 Error 5150, Severity 16, State 1 occurs during attempts to switch from a failed mirrored device to a good secondary device.
50912 If a table contains at least 2 BIT-type columns and at least 1 NULL-able column, the SQL Server will fail to send the table's log information to the Replication Server.
50543 If a server process dies when it has created a worktable, and locks have not been acquired on the worktable yet, the server can generate a stack trace and shut down.
50263 RPC causes multiple I/Os through one socket. This was causing corruption of the TDS stream.
50111 Writetext fails with 1127/1129 errors.
50075 When using isnull(rtrim(""),"TEST") we just get the first letter "T" instead of "TEST". This problem has been reproduced on System 10 GA.
50029 Any correlated subquery containing a min or a max aggregate in the inner subquery on a binary or char type fixed column will not return correct rows when doing a join using the same table in the outer and inner query.
49958 Error 804 occurs when a non-sa user who belongs to a group that has been granted CREATE DATABASE permission issues the create database command.
49790 The system stored procedure sp_spaceused returns an arithmetic overflow on large tables, for example: 600000 rows, 70 meg.
49734 Group by with lower() on varchar column values returns an incorrect number of rows.
49295 Process infected with 11 error during null text update.
The update statement will cause a segmentation violation.
49294 If the table has a text column and a unique nonclustered index, then a direct mode update to that text field will result in a 644 error.
49177 A regression produced by bugfix 44623 broke `disk remirror'.
49082 Error 8402 when using a union and a view that references another view on another database.
49065 When run, sp_reportstats receives: "Msg 232, Level 16, State 11: server `server name', Procedure `sp_reportstats', Line 67: Arithmetic overflow error for type varchar, value=0.003090. Arithmetic overflow occurred."
49011 When a POLLNVAL error is received, the polling fd is not removed; this can result in continuous errors being sent to the errorlog for the same fd.
49005 Server does not prevent a raw mirror device from being used to mirror two primary devices.
49002 When a non-clustered index is chosen for a temp table, the index name is not printed in the `showplan' output.
48964 Kill command does not work for tasks that are not sleeping under one of the valid conditions. The old semantics allowed these processes to get killed provided they woke up.
48882 @@rowcount does not have the correct value for an update statement when the update involves a join and @@rowcount is referenced upon entering an update trigger. The value is much higher than the actual number of rows updated.
48505 Frequent POLLERRs are reported in the errorlog. Error handling needs to be enhanced in order to explain the cause of the error.
48273 Msg 611, "Attempt made to end a transaction that is idle or in the middle of an update", when an attempt to create a temp table (from within a stored procedure) aborts on an interrupt. Happens for alter table which involves a temp table from within a sproc.
48169 After a LOAD TRANS is performed, the value of the generation ID of the log records sent to the Replication Server should be preserved and not reset to zero.
48034 When dropping and recreating a temp table, any stored procedure using that table will not be able to find it during optimize/compile time and thus uses the default values for number of rows and columns. Bug: the range table must be updated to the new tmp_def_id.
47989 Certain queries take significantly longer to compile under 4.9.1/4.9.2 compared with production 4.8, as a result of many more join strategies being considered.
47984 Server can kill a process or otherwise mishandle the connection if a cancel is received while sending data to the client.
47909 Speed up communication with a remote server (at the expense of increased network traffic) by setting the TCP_NODELAY flag. This fix is only applicable to operating systems using sockets, not TLI.
47826 Buildmaster -m now requires the -s option also. If it is not given, installmaster cannot run because it cannot alter the master db on the master device.
47694 During a weekly dump to a disk file, the dump database task hangs. No errors are reported.
47614 When the PROBE process is invoked from a task running on a single-engine server which is both participating in, and is the commit server for, a distributed transaction, the server may go 100% CPU bound and all user activity on the server will hang.
47361 Server stack traces on the query 1) create table t(a float) 2) select str(avg(a)) from t. This is because we are adding an implicit conversion node between two nodes of the same type (FLT8). This happens for the `real' datatype also.
47354 When a 512 error is raised and a dynamic index has been built, the current process is infected with 11, and errors 605/6103 are raised during the rollback in tempdb.
Any further attempt to access the tempdb page listed in the 605 error will return an 813 error.
47300 After receiving an 823 error (I/O error) on a disk read, 605 errors occur thereafter whenever the same page is accessed.
47298 Certain doubly-nested subqueries can result in the thread being infected.
47294 Grant/revoke fails with a stack trace and kills the process when the same user is included twice in the user list, e.g., grant execute on sproc to user1, user1. The Server should be friendlier to the user than killing his process and dumping an ugly stack to the errorlog.
47129 noobhandler has a forever loop which can fail to terminate if ioctl doesn't return the expected value. We should also check for no data on the socket.
46896 Group by on 16 columns and a distinct expression in the select list on a 17th column causes error 415 to be raised.
46629 After upgrading from 1225 to 1614, there are more logical reads on the worktable for the same query. Applies to the OR strategy when only one row qualifies.
46059 When a create index deadlocks, the descriptor information (doampg, first, root) may not be restored properly, leading to 605 errors.
44635 If a table exists with multiple bit columns, and an update trigger is tied to one of the bit columns, an update to any of the bit columns will cause the trigger to be fired.
44598 Getting Msg 806 and stack traces when doing "commit tran".
44129 Bugfix 24506 caused a regression by raising a 157 error even for standard queries.
43990 If enough objects are created in a database to cause the root page of index ncsysobjects to split, and then sp_dboption for autotrunc log is issued against that database, and the server is rebooted before a checkpoint in the database, dropping objects results in 644 errors.
43427 Getting Msg 1265 followed by Msg 804 and a stack trace when trying to delete or update tables with non-clustered indexes.
42459 An application using two dbprocs concurrently can become blocked even though the two tasks hold compatible locks. sp_lock will show that the second task is blocked by a (compatible) lock held by the first task.
41886 Msg 8412 and stack traces when using "create table" and "select for browse" statements in the same batch.
37006 Message 5704 is printed even if the client and server character sets are the same and need no character-set conversion. It should be suppressed in those situations.
36524 The sysindexes First entry is not accurate.
33871 smalldatetime does not accept formats that datetime does in an implicit convert; a syntax error results.
32183 The statement "create clustered index index_name on table (column) with sorted_data, allow_dup_row" generates error 1508 and does not create the index when a duplicate row is encountered.
31915 A select query with a union having a compute list at the end gives Msg 411.
27640 Queries containing correlated subqueries other than IN or ANY that appear under an OR get incorrect results if there are no matching rows from the outer query.
27452 SELECT INTO from a user table to a temporary table that converts a character column datatype to a tinyint datatype results in an "intn - length 1" datatype, which is incorrect. It should be a tinyint.
17271 An aggregate query that does an `exists' subquery will sometimes return the wrong answer if there are duplicates.
13495 In many cases, we process subqueries as joins where this gives the wrong answer. Existence checks should not be done as joins - they return duplicates where they should not, and the construction OR x in (select a from b) will not work when b is empty.
Bug Fixes New to Rollup 2560-2566

Bug#  Description
53601 I/O errors can lead to infected processes or erroneous 821 errors.
53118 Allow a load to continue with a warning message if the dump was done prior to bugfix 52181; stop the load otherwise.
52943 Modification to bugfix 51454 to handle str(-0.483700,10,2) correctly on hp800, ncr_svr4, avms and vms platforms.
52850 Recompiling a stored procedure which creates a table results in a 3703 error with a stack trace.
52530 On TLI-based platforms, clients can hang after receipt of a POLLERR, POLLNVAL or POLLHUP on any connected socket. It is also possible that no new logins will be accepted.
52447 Update Message 277 to reflect bugfix 19418.
52406 During OAM cleanup, pages being unlinked and deallocated are not unkept, resulting in 803 errors, stack traces and the need for a server reboot.
52332 Gets message "kernel: nrpacket: t_rcv(), no message currently available". The site handler hangs and no further RPCs can be made.
52181 4308 errors (State: 4) intermittently occur when doing a load transaction or load database. This can only happen if the dump was made when the server was very active.
52179 A server panic (ulowncheck) occurs if a load transaction is done while running the diagserver.
52060 When recompiling a system stored procedure while in a user database, the sysdepends table in the user database gets updated instead of the sysdepends table in the master database.
51959 Site handler tasks may hang under heavy RPC load between Open Server and SQL Server. This can result in all logical connections hanging.
51946 Modification to bugfix 51454 to handle str(float) correctly.
51893 If the SQL/LTI thread has reached the end of the log and is sleeping waiting for more log records to be written, a timeslice error will occur if the SQL/LTI client receives a network attention.
51892 The VMS implementation of the SQL Server's Replication Server Support Facility (RSSF) does not correctly set the internal SQL Server bitmap which indicates to the RSSF which log records should be sent to the Replication Server.
51804 Correct a stack trace produced by calling a stored procedure which performs a select from a view based on an object in another database.
51725 A space in an image column of a `bcp' file causes a NULL to be inserted into the table. Subsequent deletion of row(s) from that table will result in 605 errors. The deletion operation should be done before the update to the text/image column.
51701 The process that holds the update page lock also holds an ex_page lock. This sets up a deadlock case.
51595 Modifications to datetime.h to include some of the new defines. Required for bugfix 51191.
51508 If the server process dies when executing a query with an OR that creates a worktable, a stack trace occurs and the server shuts down.
51454 The str() function has been re-written to give precision of up to 6 decimal places.
51240 The return status from optlookip() was not correctly checked. As a result, the parser was considering anything after the constant token as an option.
51215 In rollup 1790, certain kinds of aggregate subqueries, with one or more views, may generate a different access plan. Typically, table scans are generated in spite of a useful index being present. Performance of such queries can be very slow.
51191 Modify millisecond arithmetic from 1/300th of a second precision to 1/1000th of a second.
51174 SQL Server error 513 can occur upon domain rule checking.
This will only occur if there is an implicit server datatype conversion between the datatype to insert and the datatype of the column referenced within the table.
51072 Timeslice errors will occasionally occur while compiling and executing queries. A typical stack trace will have s_mustrecompile in the stack frame.
50911 Unbalanced transactions occur in the syslogs table at times when a stored procedure is recompiled. This can cause replication of the database via the Replication Server to fail.
50786 When estimating the cost for join density, we want to exclude NULL values if any keys in the set are NULL. This will make the selectivity much smaller for a table that has a lot of NULL key rows, and thus the optimizer will use an index instead of a table scan.
50783 Correct several 821 and 605 error situations by saving the page header before calling bufpagedestroy().
50781 Errors 226, 277 and 266 have been reported when running a stored procedure that calls another one which has been dropped and then recreated.
50499 Initialize the "victim" when using trace LOCKM,4.
49653 Network checking delays can occur on heavily used servers even if the platform supports SIGPOLL notification.
49505 Fix to do proper rounding when the number of integral digits in numeric_expr is equal to the negative of the number specified in the integral_expr (e.g. round(5550.00, -3)).
49010 REUSEADDR should not be set in nopen(), as we are not specifying the local address for the new endpoint; it is being chosen for us by the network provider.
49009 The event handling code after the t_accept() in nopen() is flawed. It doesn't recognize when the currently active connection request has disconnected and can thus result in issuance of a t_accept which won't be successful.
49008 The event handling code after the t_listen() in nopen() is flawed. It incorrectly assumes that a disconnect event will clear the currently alloc'd T_CALL struct, which results in another t_alloc, wasting the initial con_req and T_CALL struct.
49007 The errors reported within nopen() are not clear and cause alarm at customer sites without lending insight into the problem.
48886 When processing connection requests, no further polling should occur on a given listener endpoint until all current connection requests have been satisfied.
48197 When the server multiplies a negative floating point number by a floating point zero, it evaluates to a negative floating point zero and stores it in the row. Subsequently, all qualifications against positive or negative floating point zero fail to select the row.
47615 A large sproc contains a syntax error towards the end (a missing comma on the second-to-last field in an insert statement); isql fails to report any errors and does not create the sproc.
47301 The linger option is not set on client sockets, resulting in loss of data if the server closes the connection before the client has finished receiving sent data.
44329 With a very heavy update load on the system, it is possible for all processes accessing a particular database to hang on disk I/O. This may occur more frequently if the DB has the `trunc. log on chkpt' option turned on.
43595 Certain selects with subqueries can result in infected processes.
39688 Correct a 203 error (sysdatabases not found) when `ins_syn_sql' is run.
36963 If `select *' is present in both aggregate subqueries participating in an OR clause, then we seem to combine both of them together even if they reference different tables. This produces incorrect results in some situations.
Bug Fixes New to Rollup 2825-2831

Bug#  Description
54191 Temporary tables are not automatically dropped when a stored procedure returns. This may cause 2714 errors.
54126 When the SQL Server re-uses some structures, it is possible that Msg 7134 errors may occur.
53896 The upgrade application needs to be modified to not perform the upgrade of the `upgrade version' to `493' until the actual maintenance modifications are made.
53699 Added missing or modified error messages to upgrade.c as reported in bug 52666. Included: 305, 595, 596, 2778, 2780, 6108, 6227, 8010, 7719, 17965, 17966. Also verified 9119, 9139, 9141, 9123, 9138, 9137, 4403.
53603 Errorlog gets "kernel: nrpacket: t_rcv(), No message currently available".
53563 [REPSRV] Add comments to `include/trace.h' which document existing LTUTIL trace flags.
53494 This problem is observed for triggers getting renormalized immediately after load database, which results in the updating of sysprocedures being done in the same user transaction.
53492 Cannot print the stack of a sleeping process.
53356 SQL queries using a text/image column in a where clause which is part of an OR clause can cause 7134 or 804 errors. Example: select id from table1, table2 where txtcol not like "% " or intcol not null.
53234 The SQL Server unexpectedly unreserves the database's LTCONTEXT while the LTM is still actively scanning the database's log. A 9120 SQL Server error will be generated, causing the LTM to shut down.
53163 "select col1, diff1 = sum(datediff(dd,col2,@var1)), diff2 = sum(datediff(dd,col2,@var2)) from tab group by col1" produces the same value for both of the sums when run from a stored procedure. Also sometimes fails for ad hoc queries.
53068 Certain XACTs generate duplicate key errors when applied to the replicate SQL Server (via the Replication Server) even though the same XACT did not generate duplicate key errors when originally applied to the primary SQL Server.
53053 Exceptions generated while within the RPC relay code could cause a signal 10 in the user process. This was most noticeable in the handling of attention signals (CTRL/C).
52870 os_create_keyfile should have enhanced error messages to better explain what "Segment {segmentname} is in use" means. It should better reflect a conflict that could involve removing these files if the server is not currently running due to an
52705 In file instmsgs.ebf, for message 6227 there is a typo in the line which drops the message from the German catalog. It has been written as 6627.
52704 When a stored procedure references a view, and the view or its underlying table is dropped and recreated, executing the stored procedure can result in wrong columns being returned.
52695 Server crashes due to stack corruption, with a "stack guardword corrupted" message, while trying to do a select into a table from a view whose underlying table was dropped and recreated with a different schema.
52666 Provide new 277 error message text for upgrade.
52651 SIGURG and SIGIO are set for all processes within the same process group instead of just the server process.
52629 During update of a text column, if error 7105 is raised we print "TEXT/IMAGE page 0 does not have a next page, although it should" based on the pnextpg field. We should rather print the page number which has the problem (broken text chain).
52371 Attention packets received while processing multiple connection requests can lead to the scheduler process becoming infected. Stack trace output shows signal 11 occurring in nmskget called from nrecvattn.
52268 insert into table1 select string from table2 order by table2.column inserts bogus characters into table1 when there is a clustered index on table2.column.
52199 Improper synchronization of cancel processing with outstanding I/Os could result in channel closure before I/O completion.
52190 1129 errors may occur due to an MP window in page allocation.
52129 Assume procedure proc1 calls proc2 and view view1 is built on view2. When the ownership chain is not broken, explicit revocation on proc2 will raise error 229 when a user executes proc1, while after explicit revocation on view2 a user can still select from view1.
52059 Data translation for type float between certain platforms (depending on byte-swapping) is done incorrectly and gives unexpected results.
52030 Dumping a database to disk can be aborted with a "dbsspacket, write interrupted system call" message.
51785 When using an order by or a group by clause that creates a work table, and the query has an assignment to a local variable, the server crashes and kills the session.
51712 MP configurations use the clock interrupt to periodically wake up secondary engines to find work. However, since clock interrupts are tied to actual CPU time consumed, secondary CPUs which are idle are not woken up as often as required.
51389 When a smalldatetime is used in a rule, the rule may work incorrectly, either reporting a rule violation when there should not be one, or not reporting one when there should be.
51287 Incorrect arguments are passed to the ex_callprint routine to report error 7220, "Site `%s' not found in interfaces file", resulting in the cryptic message "Error: 72, Severity: 20, State: 11". Error 502 must have the same problem.
47467 With an empty table that has an index on a char column, if the sort order is changed on the server and then dbcc reindex is run, an error is received on the subsequent select indicating that the index is suspect. If the table has a row then all is fine.
47049 Shared memory segment sizes on IBM AIX 3.2 are limited to 256MB; this limits the maximum amount of memory that Sybase can use on a machine.
45962 If an insert or update query has a subquery which can return null, the query inserts or updates columns that are not supposed to have null.
41730 "kernel: nspacket: send, Invalid argument" messages in the errorlog.
34889 The 821 error is not useful, and renders the server unusable. 821 often occurs after 605 errors and prevents other user processes from obtaining a free page.
26285 A trigger attempts to query the inserted/deleted table for text data using the builtin "datalength", leading to a 605 error. This is not corruption, but the user connection gets disconnected.
22150 Correlated subqueries returning count(*) do not return values where the count is zero.

Q5.2: TECHNICAL NEWS Volume 3, Number 3 August, 1994
_________________________________________________________________

Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose. In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental or consequential damages.
Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey
Contributing Writers: Donna Sams-Nyirendah, Greg Klinder, Ray Rankins, Aimee Grimes, Marc Sugiyama, Cris Gutierrez, Bob Perry, John Blair, Guy Moore, Danielle Scherer, Marian Macartney, Gary Sturgeon, Bret Halford, Sybase Engineering

Send comments and suggestions to:
SYBASE Technical News
6475 Christie Avenue
Emeryville, CA 94608

This issue of the SYBASE Technical News contains new information about your SYBASE software. If needed, duplicate this newsletter and distribute it to others in your organization. Keep this newsletter with your SYBASE Troubleshooting Guides.

IN THIS ISSUE:

Tech Support News/Features
  Communicating with TechNews Staff
  How to get SYBASE Technical News
  Customer Satisfaction Survey

Server
  Avoiding Server Catastrophe
  New Syntax for RAISERROR
  Changing the Sort Order Manually
  sysdevices `status' Control Bits
  USE DATABASE Hanging After Upgrade
  Float / Money Display Methods
  Disk Mirroring Clarification
  RPCs in Threshold Procedures
  Backup Server Context Allocation Errors
  TLI Address Translation Addendum
  OpenVMS Mount from CD-ROM
  Sybase Compatibility with RAID

Connectivity / Tools / PC
  BCP Programming and Paging
  SQL Monitor Server and ceventbuf
  SQL Monitor Cheat Sheet
  SQL Monitor Incompatibility Table
  Corrections to NetWare Installation Manual
  Corrections to NetWare "Zombie" Article
  Changes in PC Open Client/C Packaging

Bug Reports
  Bug 54192/34245 - Optimizer
  Bug 55721 - sp_estspace
  Bug 56619 - sp_columns

Communicating with Technical News Staff

There is now a mail alias through which both customers and Sybase employees may contact the Sybase Technical News team. Send mail to technews@sybase.com to make suggestions or to submit articles. At present, due to legal issues, we are only accepting article submissions from Sybase employees; however, we welcome ideas for articles from everyone. The tsg alias, announced in Volume 3, Number 2 of SYBASE Technical News, is a similar channel set up for the Troubleshooting Guide. Customers and employees can comment on the Troubleshooting Guide with corrections, additions, and other input. As with the tsg alias, technews is not a place to mail questions that would better be asked of Technical Support. Please call 1-800-8-SYBASE to contact Technical Support.

How to Get the SYBASE Technical News

The SYBASE Technical News is automatically distributed to all registered Sybase support contacts. If you are one of your company's registered support contacts and you do not receive this newsletter directly, please call Customer Service at 1-800-8-SYBASE to correct this. Whether or not you are a Sybase support contact, if you would like to receive your SYBASE Technical News in text format by email, send mail to technews@sybase.com to be added to the automatic distribution list. You will receive your newsletter by email at the same time that copies are distributed to comp.databases.sybase and the Sybase OpenLine forum of CompuServe. If you would like to receive hardcopy of the SYBASE Technical News and are not a registered support contact, you may order it from Sybase Customer Fulfillment, as you do other documents. U.S. and Canadian customers may call 1-800-685-8225 or fax 1-617-229-9845. International customers with a U.S. license agreement may use the fax number; all other international customers should contact their Sybase subsidiary or local distributor.
Ask for document ID 50005-1-0300-03 (this issue). Document ID numbers for future issues will change by volume and issue number; for example: · 50005-1-0300-04 is Volume 3, Number 4 · 50005-1-0400-01 is Volume 4, Number 1 Back issues prior to Volume 3 are not currently available; indeed, much Technical News information is volatile in nature, and items printed in very old issues may well be out of date. Technical Support Customer Satisfaction Survey Since August 1993, Customer Service and Support (CS&S) has been sending monthly case closure satisfaction surveys to customers who have used Technical Support services. About 23 percent of these customers have responded to the surveys, and we appreciate your willingness to do so. Some of the responses have included questions about what we are doing with your feedback and suggestions. We will try to answer those questions in this article. Your ratings from the questions asked are entered and tabulated at the end of each month. We then compile statistical reports to give us a general picture of how we are doing relative to each of the service attributes addressed in the survey. We also distribute the returned surveys to a review team comprised of Technical Support Engineers, who review each month's returns for consistent and critical narrative comments. These comments give us additional data about attributes addressed in the survey and provide data about attributes that we do not specifically address. The combined general picture and narrative comment data is used by the review team to recommend specific corrective actions to CS&S management. Some of the recommendations made by the review team are taken directly from suggestions given in the survey responses. CS&S management reviews and prioritizes the recommendations and acts on them according to their priority. We have already begun to implement improvements based on those recommendations. The statistical reports indicate that 80 percent of you are generally satisfied with the support that you receive. We are pleased, because this is significantly better than we were doing a year ago. However, your written comments identify areas where we can and will further improve our support to you. In future issues of this newsletter, we will be providing more specific information on actions we are taking in response to your feedback, so stay tuned. In the meantime, please keep the feedback and suggestions coming. Avoiding Server Catastrophe Server catastrophe is the inability to recover from a database failure in a timely manner or the inability to recover at all. If your business relies on your SQL Server being operational, then you must be prepared for situations in which your database, your major tables, or your entire Server fail. Familiarize yourself with the backup and restore commands and procedures for your SQL Server. The documentation provided with your SYBASE software is a good starting point. In particular, make sure that you read chapters within the System Administration Guide concerning the use of dbcc commands and procedures for recovering a lost master device or lost master database. This article contains a checklist of some important precautions that you should take in order to avoid a Server catastrophe. SQL Server is a large and sophisticated piece of software. It is not within the scope of this article to describe all possible failure scenarios. 
However, we hope to help you avoid some of the most common failures that have been reported to the Sybase Technical Support Response Team (the team responsible for handling the most urgent customer SQL Server problems).

Checklist
· Can you rebuild your master database from scratch?
· Can you rebuild your user databases from scratch?
· Can you rebuild your largest index in each database?
· Can you be certain that your backups are good?
· Would your backups be useful in the event of a failure?
· Will your system automatically warn you of problems?
· Are your devices mirrored?

If you answered "no" to any of the questions above, or if you are uncertain of the answer or of the reason for its importance, then you are at risk of experiencing a SQL Server catastrophe. The following sections will help you to answer "yes" to all of the questions listed above, and will also go some way toward explaining why you should be able to answer "yes" to every one.

Can you rebuild your master database from scratch?
There is a large quantity of important information stored in your master database, including information about the master database itself! There are also many ways to lose your master database and the backups that you have kept so carefully. As an example, consider this real-life scenario: You have just spent the entire day (12 hours) creating new databases for your developers. Among other things, this procedure involved setting up devices, logins, and passwords, and making a number of modifications to the sp_configure parameters. You dutifully back up all of your work by doing a database dump of your master database, then leave for home. When you return the following morning, you discover two things: (1) your System Administrator accidentally repartitioned the disk on which your master device was created, and (2) the tape on which you stored last night's backup was overwritten by the same person because he "couldn't find another tape to use."

Keep up-to-date hard copies of the following tables:
· sysdatabases
· sysusages
· sysdevices
· syslogins

It's also a good idea to use the bcp utility to keep online copies of these tables (a sample set of bcp commands appears at the end of this section). Be sure to use the -c option of bcp; the resulting output is human-readable. An additional reason to use the -c option is that the output can sometimes be reloaded straight into a freshly created master database. You will need to delete some rows before using bcp to reinsert the information, and this can only be done to output generated under the -c option. These rows are:
· syslogins: the row for the "sa" login
· sysdatabases: the rows for master, model, tempdb, and sybsystemprocs
· sysusages: the rows for master, model, and tempdb
· sysdevices: the row for the master device

Keep a hard copy of your sp_configure parameters (and, optionally, the output from buildmaster -yall). This is usually not critical, but a hard copy could save you lots of headaches when you try to regain the optimized performance you spent hours attaining.
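As a minimal sketch of those online copies (the server name SYBASE10 and the output file names are hypothetical; bcp will prompt for the "sa" password):

* begin code sample =======================
# Save human-readable copies of the key master database tables
# with bcp's -c (character mode) option.
bcp master..sysdatabases out sysdatabases.bcp -c -Usa -SSYBASE10
bcp master..sysusages out sysusages.bcp -c -Usa -SSYBASE10
bcp master..sysdevices out sysdevices.bcp -c -Usa -SSYBASE10
bcp master..syslogins out syslogins.bcp -c -Usa -SSYBASE10
* end code sample =========================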
Can you rebuild your user databases from scratch? Can you rebuild your largest index in each database?
Some seemingly simple errors can be corrected only by dropping and re-creating objects, or by dropping and re-creating your entire database. Plan for it.

Issue #1 - size of database
If your database is too large to back up effectively, consider going to the expense of setting up a "hot backup" server: as soon as a dump tran is completed on your production server, it is immediately loaded onto a duplicate server via a load tran. If your production server fails, your "hot backup" server immediately becomes your production server. Ideally, your "hot backup" server should be located on a separate machine. Sybase's Replication Server is also an option in this situation; with Replication Server, you have the added benefit of automated backups.

Issue #2 - size of objects within database
Try to ensure that none of your tables is so large that you could not re-create or restore it before your users demanded that the system be available. If your current database design includes a table so large that you can't rebuild it before the next business period, consider breaking it into separate tables based on some part of the information already being stored in one or more of the table's columns. This may not be feasible for some installations, but we strongly recommend that you look for information within the table that is static and that, with some redesign, can be isolated. Once the static information is isolated, backing it up is a one-time issue. When failure occurs, it will happen in a much smaller table and can be handled much more easily.

Issue #3 - disk space
Ideally, you should have enough free disk space to replicate your largest database. This may be necessary in situations where a failing device makes it impossible to produce a good database dump, but you need to retrieve as much of the information within the database as possible. The fastest solution could be to create a new database and use the select into command to move the information from the original database to the new database (see the sketch below). The original database can then be dropped and the new database renamed, using the sp_renamedb stored procedure. At the very minimum, it is desirable to have enough free space within each of your databases to rebuild your largest index (see your System Administration Guide for details on how to calculate this space).
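A hedged sketch of that salvage-by-copy procedure (database and table names are hypothetical, and "select into/bulkcopy" must be enabled in the target database for select into to work):

* begin code sample =======================
-- Allow select into in the new database (run sp_dboption from master):
1> sp_dboption newdb, "select into/bulkcopy", true
2> go
1> use newdb
2> go
1> checkpoint
2> go
-- Copy what can still be read from the failing database:
1> select * into newdb..big_table from olddb..big_table
2> go
-- Once everything salvageable is copied, drop the original and
-- rename the copy (sp_renamedb may require single-user mode):
1> drop database olddb
2> go
1> sp_renamedb newdb, olddb
2> go
* end code sample =========================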
Can you be certain that your backups are good?
Your backups are not useful if they contain corruption. For performance reasons, consistency checks are not performed during the dump of a database. A database dump can appear to have completed successfully and still contain corruption; the corruption won't become apparent until you do a load database. Make sure that you perform dbcc checks regularly. Ideally, you should be checking your entire database immediately before each backup. The dbcc pass should contain all of the following checks:
· dbcc checkdb(database_name)
· dbcc checkalloc(database_name)
· dbcc checkcatalog(database_name)

Reviewing the output of your dbcc checks can also be automated to some degree (see "Scanning the Error Log Automatically" below). If your database is too large to be checked before each backup, consider using one of the following methods:
1. Perform dbcc checktable and dbcc tablealloc instead of dbcc checkdb and dbcc checkalloc, respectively. Identify tables that are more volatile than others and check those tables more frequently.
2. Perform dbcc checks on a duplicate database. This could be either a "hot backup" server or a one-off copy of the database.
3. If an individual table is too large for regular checking, consider redesigning your database. It is likely that much of the contents of your table consists of static information. Isolate the static information; you will not be able to check it once and then ignore it completely, since hardware errors can corrupt static information too, but you can perform dbcc checks and backups much less often on that information.
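For a database small enough to check in full before each backup, the three checks can be run together from isql; a sketch (the database name and dump file path are hypothetical):

* begin code sample =======================
-- Run the full set of consistency checks before dumping.
1> dbcc checkdb(mydb)
2> go
1> dbcc checkalloc(mydb)
2> go
1> dbcc checkcatalog(mydb)
2> go
-- Only dump the database once the dbcc output is clean.
1> dump database mydb to "/backups/mydb.dump"
2> go
* end code sample =========================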
Would your backups be useful in the event of a failure?
When their systems fail, some customers are not able to load their backups, because doing so would lose all work done between completion of the backup and the time of failure. These customers are not backing up often enough. You need to back up as often as it takes to be able to say, "I can afford to lose the data. I always have a backup that I can load and then continue operations normally." Doing regular transaction dumps and saving the output will help to ensure that you can use your backups to restore your system to an up-to-the-minute condition. Discarding even one transaction dump will guarantee that you cannot.

Will your system automatically warn you of problems?
The most obvious means of checking for server problems is scanning the error log visually, but there are methods that are easier and less error prone. On most systems you can pipe your error log through a filter to detect exception messages. This is not a Sybase-supplied feature, but it is not a difficult thing to do (see "Scanning the Error Log Automatically" below). Some corruptions, if fixed immediately, will cause few problems. It's the corruption that is allowed to fester for long periods of time that leads to a situation where recovery is difficult.

Are your devices mirrored?
Mirroring your devices can help to ensure that recovery from some types of system failure will take seconds or minutes instead of hours or days. A disk fragment cannot be dropped from a Sybase database, and a disk fragment cannot be rebuilt separately from the rest of the database. If one of your devices fails and you have not mirrored that device, you will be forced to drop and re-create the entire database. Can you afford to have your users wait that long? It would probably be far less expensive to maintain mirrored devices.

Scanning the Error Log Automatically
Following is an example of a UNIX C shell script that might be used to form part of an "early warning system". You might also wish to add some code to ensure that the DBA is not alerted for certain common or informational messages that appear in your error log (hint: use the -v option of the grep or egrep utility).

* begin code sample =======================
#!/bin/csh
# file: scan_log
# author: Greg Klinder
# Sybase, Inc.
#
# purpose: A C shell script for warning the Database Administrator
# about potential problems with the Sybase SQL Server.
#
# The reliability of this script for alerting the DBA to potential
# server problems depends entirely on the contents of the WORRIES
# file. This file contains a list of regular expressions which,
# hopefully, can be used to correctly identify *most* serious
# errors that might appear in the error log.

set DBA_ALIAS=(ourdba@oursite.com otherdba@theirsite.com)
set ERRORLOG=/usr/sybase/install/errorlog
set FERR_FILE=/tmp/filtered_err
set WORRIES=/usr/ourdba/regexp

# -------------------------------------------------------------
# Check to see whether temporary file exists. If so, remove it.
# -------------------------------------------------------------
if ( -f ${FERR_FILE} ) then
    rm ${FERR_FILE}
endif

# ---------------------------------------------------------
# Filter out all messages within the errorlog that might be
# of concern to the database administrator. Save them to a
# temporary file.
# ---------------------------------------------------------
if ( -f ${WORRIES} && -f ${ERRORLOG} ) then
    egrep -f ${WORRIES} ${ERRORLOG} > ${FERR_FILE}
else
    echo ${ERRORLOG} | mail -s "scan of server error log failed" ${DBA_ALIAS}
    exit 1
endif

# -----------------------------------------------------------
# If the temporary file is not empty, then we know that there
# are messages within the errorlog that should be examined.
# Alert the database administrator(s) by email.
# -----------------------------------------------------------
if ( ! -z ${FERR_FILE} ) then
    mail -s "problems in error log ${ERRORLOG}" ${DBA_ALIAS} < ${FERR_FILE}
endif
* end code sample =========================

Here is an example of what you might put in the WORRIES file:

_WARNING:_
_WARNING -_
Error:
Msg

Most Commonly Made Serious Mistakes
· Using dump tran ... with no_log can in some situations cause corruption. Whenever possible, use dump tran ... with truncate_only instead. If the log is 100 percent full and truncate_only produces an 1105 error, then use no_log.
· Doing dump tran ... with truncate_only or dump tran ... with no_log renders all subsequent transaction dumps useless; each dump tran is only as good as the previous one.
· Backing up your SYBASE devices at the operating-system level is useless unless the backup was done while SQL Server was down and you plan to restore all devices simultaneously. Additionally, OS-level device backup is not supported.
· Your server will not be very forgiving when it comes to mismatches between the contents of the sysusages table at the time of dump database and its contents at the time of load database. In Servers before System 10, you will get 2558 errors from dbcc checkalloc; in SQL Server 10.0, the load will change sysusages and you may not find the output useful.
· If you drop a user database, re-create it, and then load the most recent dump, you will still be missing all of the information generated when you used the sp_extendsegment stored procedure (that information is stored in the sysusages table in master, not in any table within the user database). You will need to repeat all of the sp_extendsegment commands; if you neglect to do so, you may end up with occurrences of the 1105 error. If you created user-defined segments, you must re-create them before you attempt to reload.
· In general, we recommend that you put the log for any user database on its own device.

Syntax Change for RAISERROR
Under the old RAISERROR syntax, it is very difficult to distinguish where the parameters for the message string end and where the Extended Error Data (EED) arguments begin; they are distinguishable only by the presence or absence of a comma. A syntax change planned for an upcoming System 10 SQL Server Rollup results in an error message that is more readable and more easily maintained by customers and Technical Support staff.

The old syntax was:

raiserror error_number [{format_string | @local_variable}] [, arg_list]
[extended_value = extended_value [{extended_value = extended_value}...]]

The new syntax is:

raiserror error_number [{format_string | @local_variable}] [, arg_list]
[with errordata restricted_select_list]

where restricted_select_list can follow the standard select_list syntax rules, as described in the raiserror section of Volume 1 of the SQL Server Reference Manual, with the restriction that no from, where, or other select clauses can be included. This means that wildcard characters cannot be used.
Use of EED, as described in the SQL Server Reference Manual, will stay the same. The third example in the examples section of that manual should now read:

3. raiserror 20100 "Login must be at least 5 characters long"
   with errordata "column" = "login", "server" = @@servername

According to the new syntax, the following variants of raiserror are allowed, have been exercised, and are verified as functional:

raiserror 25001
raiserror 25001 "formatted string"
raiserror 25001 @variable
raiserror 25001 "formatted string %1!","foo"
raiserror 25001 @variable,"foo"
raiserror 25001 "formatted string %1! with %2!","foo",@bar with errordata ExtendedValue=5,"ExtVal"="Extval",3+4
raiserror 25001 @variable,"foo",@bar with errordata ExtendedValue=5,"ExtVal"="ExtVal",3+4
raiserror 25001,"foo"
raiserror 25001,"foo",@bar
raiserror 25001,"foo",@bar with errordata ExtendedValue=5
raiserror 25001,"foo",@bar with errordata ExtendedValue=5,"ExtVal"="Extval",3+4
raiserror 25001 with errordata ExtendedValue=5
raiserror 25001 with ExtendedValue=5,"ExtVal"="ExtVal",3+4

This change will be effective with a Rollup scheduled for release later in 1994. When inquiring about the status of this Rollup, mention bug # 51600.

Changing Sort Order Manually When sybinit Fails
During a new installation of SQL Server 10.0.1 for HP9000/800, some customers have encountered a bug, 56745, when attempting to change the default sort order from binary to dictionary case-insensitive in sybinit. The following errors are raised:

CONNECTIVITY ERROR: Error sending SQL to server `SYBASE'
SERVER ERROR: `sp_configure default sortorder id' failed.

If you encounter this problem, you must change the sort order manually according to the following method. The example below demonstrates conversion of the sort order from binary to nocase under the roman8 character set. The available sort orders for a character set may be found in $SYBASE/charsets/charset_name, where charset_name is the name of your character set. For example:

alder% cd $SYBASE/charsets
alder% ls -C
ascii_8 cp437 cp850 eucjis iso_1 mac roman8 sjis
alder% ls -C roman8
binary.srt dictionary.srt nocase.srt charset.loc noaccents.srt nocasepref.srt

The available sort orders have a .srt filename extension. The steps to change the sort order are:

1. At the isql prompt, execute the command dump tran master with truncate_only. Make sure you have some space available in master so that you will not have problems rebuilding the system table indexes. If space is tight, alter database master before proceeding further.
2. Shut down the SQL Server.
3. If your SYBASE devices are on UNIX file systems, issue three sync commands at the shell prompt so that the OS buffer cache gets flushed.
4. Reboot the SQL Server in single-user mode.
5. Execute the following commands in isql:

1> select name, id, type from syscharsets
2> go

Your output should look something like this:

name           id  type
_____________________________________________________________
ascii_8         0  1001
roman8          4  1001
nocase_roman8  22  2001
bin_roman8     50  2001

If you see "nocase_roman8", then type:

1> sp_configure `default sortorder id', 22
2> go
1> reconfigure with override
2> go

If you do not see "nocase_roman8", then:
· Exit isql.
· Execute this command from the shell prompt:
  $SYBASE/bin/charset -Usa -Ppassword -Sservername nocase.srt roman8
· Go back into isql and type:
  1> sp_configure `default sortorder id', 22
  2> go
  1> reconfigure with override
  2> go

6. Shut down the SQL Server.
7. If your SYBASE devices are on UNIX file systems, issue three sync commands at the shell prompt so that the OS buffer cache gets flushed.
8. Reboot the SQL Server. This reboot rebuilds the system indexes and shuts down SQL Server automatically.
9. Reboot the SQL Server again.

At this point the Server should be configured for the nocase sort order under roman8. Verify this by looking at the errorlog after the boot. You will need to run sp_indsuspect against each database to see if there are any indexes that must be rebuilt because of the sort order change. Please refer to Chapter 17 of your System Administration Guide for instructions.

sysdevices `status' Control Bits
The status column in the sysdevices table is a bit map indicating the type of device, default status, and mirror status. The status control bits are:

Decimal  Hex    Status
-------  -----  ------
1        0x01   Default disk
2        0x02   Physical disk
4        0x04   Logical disk
8        0x08   Skip header
16       0x10   Dump device
32       0x20   Serial writes
64       0x40   Device mirrored
128      0x80   Reads mirrored
256      0x100  Secondary mirror side only
512      0x200  Mirror enabled
1024     0x400  Device information in configuration area
2048     0x800  Mirror disabled

Examples: A value of 2275 (0x8E3) in the status column of sysdevices means that the device is:
· 2048 0x800 Mirror disabled
· 128 0x080 Reads mirrored
· 64 0x040 Device mirrored
· 32 0x020 Serial writes
· 2 0x002 Physical disk
· 1 0x001 Default disk

A value of 2242 (0x8C2) in the status column of sysdevices represents the following:
· 2048 0x800 Mirror disabled
· 128 0x080 Reads mirrored
· 64 0x040 Device mirrored
· 2 0x002 Physical disk

Bug fix 31027 introduced the new status bit value 2048.
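The same bit arithmetic can be done in Transact-SQL with the bitwise AND operator (&); for example, this small sketch lists the devices whose status word has the "device mirrored" bit (decimal 64) from the table above set:

* begin code sample =======================
-- List devices whose status includes the "device mirrored" bit.
1> select name, status
2> from master..sysdevices
3> where status & 64 = 64
4> go
* end code sample =========================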
use database Hanging After Upgrade
A few customers are experiencing intermittent hanging after upgrading to the 10.0 SQL Server when they execute a use database command. No bug has yet been assigned to this behavior, because so far the problem is intermittent enough that we have not been able to reproduce it, either at the customers' sites or in Technical Support. To avoid the possibility of this problem occurring, make sure that the configuration value for open databases is at least equal to the number of databases actually present on your SQL Server. You can change this value with sp_configure; refer to Chapter 12 of your System Administration Guide for further details.

Float and Money Display Methods
Occasionally, a customer may want to display more than the default six significant digits to the right of the decimal point of a float or money value. One method that a customer might find intuitive, select convert(char(20), floatcol), does not work. However, as of the 4.9.1 release of SQL Server, there is a way to display more digits. Using the str function documented in the Commands Reference Manual, you can display more digits like this:

select str(floatcol, length1, length2) from table

length1 is the total number of characters to return, including the decimal point, blank spaces, and digits to the right and left of the decimal. length2 is the number of digits to the right of the decimal point. For example, the command:

select str(discount,14,11) from pubs2..salesdetail
where title_id = `PC1035' and ord_num = `124152'

will return the string 50.50000000000. The function adds extra zeros to the end of the number as padding if there are fewer than the requested number of digits to the right of the decimal point.

Disk Mirroring Clarification
The System Administration Guide for SQL Server release 4.9.2 contains the following statement: "... SQL Server reads from the disk where the last I/O was `closest' to the current read request." This statement is not in the System Administration Guide for release 10.0, leading to the frequently asked questions, "Is this true for System 10? Has it ever been true? How could this work?" In fact, this feature was planned for release 4.8 but was never implemented, due to the differences in hardware from platform to platform. The plan was mentioned in some marketing documents and slipped into pre-System 10 System Administration Guide manuals. The System 10 System Administration Guide corrects this erroneous information.

Limitation on RPCs Called from Threshold Procedures
Threshold procedures in System 10 can make remote procedure calls to a remote server, but only if that server is an Open Server; a remote procedure called from a threshold procedure cannot reside on a SQL Server. The reason is that a valid user/password combination must be provided to the remote SQL Server. SQL Server no longer stores passwords in plain text; rather, it encrypts them with a one-way algorithm. A threshold procedure executes with the user ID (uid) of the user that called sp_addthreshold, not the uid of the user that caused the threshold to be exceeded. The calling server is unable to provide the called server with a valid user/password combination, because it cannot decrypt the encrypted version of the password kept in syslogins for that uid. Remote Open Servers, which do not use this user/password mechanism, can execute RPCs from threshold procedures.
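A threshold procedure that acts locally avoids the limitation entirely. The following is a minimal sketch only: the procedure body and the dump file name are hypothetical, and the four parameters follow the usual calling convention for threshold procedures (verify the details against your System Administration Guide):

* begin code sample =======================
-- Last-chance threshold action that stays on the local server
-- instead of calling an RPC on a remote SQL Server.
create procedure sp_thresholdaction
    @dbname       varchar(30),
    @segment_name varchar(30),
    @space_left   int,
    @status       int
as
    -- Free log space locally; the dump file name is hypothetical.
    dump transaction @dbname to "/backups/last_chance.dump"
    print "LOG DUMP: last-chance threshold fired for %1!", @dbname
* end code sample =========================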
Backup Server Context Allocation Errors
Sybase Technical Support gets many calls from customers who see the following messages:

Backup Server: 1.20.4.1: CS_CONTEXT allocation failed
Backup Server: 1.15.4.1: UNRECOVERABLE CONDITION: ALL SESSIONS WILL TERMINATE ABNORMALLY. THE BACKUP SERVER MUST EXIT

These errors indicate that the context allocation routine failed when it tried to load localization files. One or more of the following problems may have caused these errors:
· The SYBASE environment variable may be incorrectly set.
· The LANG environment variable may be set to "C" (the default), which does not exist in your locales.dat file.

There are two ways to correct the second situation:
· Use the UNIX command unsetenv LANG, or
· Add the following line to your $SYBASE/locales/locales.dat file:
  locale = C, us_english, iso_1

TLI Address Translation Addendum
In Volume 3, Number 2 of SYBASE Technical News, a section on TLI network addressing incompletely explained the translation of the hexadecimal address number. This is the actual translation of the following example:

\x000207d082d6330d0000000000000000
· \x is the hex indicator
· 0002 is the address family
· 07d0 is the port number
· 82d6330d is the network node (IP) address

The optional trailing zeros are present in this example. The address family identifies the format of the network address. The number "2" means "internetwork" (UDP, TCP, and so on). The entire set of families is given in /usr/include/sys/socket.h; for example, DECnet is 12 and AppleTalk is 16. This does not mean that a machine supports all of these protocols; it simply identifies the format of the address. While the byte order of the port number and network address is defined in some specifications for IP addresses (and does not vary from platform to platform), the byte order of the address family is not fixed. Some machines with the opposite byte order from the Sun4 (DEC and Intel machines) may show 0200 rather than 0002 (the bytes are swapped).

OpenVMS Mount from CD-ROM
In Volume 3, Number 2 of the SYBASE Technical News, we described how to avoid 803 errors when installing from CD-ROM by ensuring that you use the correct mount command. As of the System 10 release of SQL Server, distribution of software by CD-ROM is available for OpenVMS as well as OSF. The mount command is:

mount device_name sybase cdrom

Please refer to Volume 3, Number 2 of the SYBASE Technical News for further information about the 803 error.

Sybase Compatibility with RAID
There has been some confusion about whether Sybase software works with RAID disk technology. This issue has two sides:
· Will the software's asynchronous I/O code work with RAID disks?
· Has Sybase certified products on a platform using RAID disks?

SYBASE Async I/O and RAID
The way SQL Server uses asynchronous I/O on the AT&T (NCR) platform, for example, is simply to use the supplied system calls to issue an asynchronous I/O request. It then polls for the I/O completion in a prescribed manner; if an I/O error is returned, the SQL Server reports it and corrects for it. You configure, with disk init, the device to which the I/O is actually done; this is usually also a device entry in /dev. From the SQL Server's perspective, there is a device file (/dev/xxxx) that it can open and to which it can direct I/O requests. The SQL Server has no knowledge of the type of device represented by the entry in /dev; it could be a single partition of a disk or some other abstraction, such as a RAID disk. As with any device, buffered I/O may cause complications for data recovery.

Certification Issues
After an operating system vendor, such as SunSoft, tests and certifies its new hardware devices for a new OS version, Sybase tests SQL Server to ensure that it is compatible with the new OS. If a RAID device is supported by the OS as a standard disk device, then SQL Server will support the RAID disk subsystem, since SQL Server is certified against the OS rather than against the disk subsystem. It is not possible to recommend one RAID system over another, since the results are system and application dependent.

bcp Programming and Paging
Using bcp_init, bcp_bind, and bcp_done in a 3GL program can cause each row to be started on a new page, even though the row might not be a full page in length. If your function to load the data into the database is called for each row until EOF, and that same function contains bcp_init, the bcp_bind calls, and bcp_done, you may see this problem: having initialization (bcp_init) occur each time a row is added causes each row to be stored on a new page. This is not documented in the Open Client/DB-Library Reference Manual. Instead of using all three calls in the same function, run bcp_init and bcp_done outside the function that does the bcp_bind calls. Be aware that then either all the data is sent when bcp_done is executed, or nothing is sent if the program crashes before bcp_done.

SQL Monitor Server and ceventbuf
Although it is documented only in the "Special Considerations" section of the SQL Monitor Server Supplement, changing the ceventbuf value on the SQL Server is a required step in setting up the SQL Monitor Server. It is important that you heed the information regarding ceventbuf values in the SQL Monitor Server Supplement; an algorithm listed there will help you calculate the size to which you should set your ceventbuf value. The default ceventbuf value for SQL Server is 100; we suggest a minimum of 2000 in order to use SQL Monitor. Changing this value is required by SQL Monitor, not by SQL Server. For windows that depend on the event buffer size (see the SQL Monitor Server Supplement for this information), we recommend a low sample interval for SQL Monitor Client of 5-10 seconds.
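On releases where ceventbuf is one of the buildmaster configuration variables (as the buildmaster -yall discussion in Q1.20 suggests), the change might look like the sketch below. The master device path is hypothetical, and the variable name should be verified against your own buildmaster -yall output and the SQL Monitor Server Supplement before you rely on it:

* begin code sample =======================
# Inspect the current value (run with the SQL Server shut down):
buildmaster -d/sun4db/system10db/SYBASE10_master.dat -yall | grep ceventbuf
# Set the suggested minimum for SQL Monitor:
buildmaster -d/sun4db/system10db/SYBASE10_master.dat -yceventbuf=2000
* end code sample =========================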
SQL Monitor Cheat Sheet
The order in which to start the three processes involved in monitoring your SQL Server is: 1) SQL Server, 2) SQL Monitor Server, and 3) SQL Monitor Client. If SQL Server is rebooted, then both the SQL Monitor Server and Client must be rebooted. The process details are:

1. Start the SQL Server as user "sybase". Use the -M flag to specify the location of the server_name.krg file. (See the SQL Server Installation Guide for your platform.) For example:

setenv SYBASE /usr/local/system10
setenv DSLISTEN SYBASE10
$SYBASE/bin/dataserver -d/sun4db/system10db/SYBASE10_master.dat -e$SYBASE/install/errorlog_SYBASE10 -M$SYBASE &

2. Start the SQL Monitor Server as the same "sybase" user, on the same host machine, with _no_spaces_ between each flag and the flag's value. Using the -i flag, point to an interfaces file that has both the SQL Server entry and the SQL Monitor Server entry; specify the same location of the SQL Server .krg memory file using the -m flag. Be sure to start SQL Monitor Server on the same machine as SQL Server, because SQL Monitor Server has to access the same shared memory region.

$SYBASE/bin/monserver -MMON10_SYBASE10 -SSYBASE10 -Usa -Ppassword -i$SYBASE/interfaces -l$SYBASE/install/MON10_SYBASE10.LOG -n -m$SYBASE -O &

NOTE: The two flags, -M and -m, have different meanings. See the SQL Monitor Server Supplement for complete start-up instructions.

3. Start the client as any user, on any client machine, _using_spaces_ between each flag and its value. The -I flag must point to an interfaces file that has both the SQL Monitor Server entry and the corresponding SQL Server entry that was passed on the monserver command line. The SQL Server name must be the same name that appears in the interfaces file (or the .INI file for PCs); it cannot be an entry that has the same host and port number but a different SQL Server name. (See the SQL Monitor Release Reference Manual for complete start-up instructions.)

UNIX: /usr/directory/bin/sqlmon -M MON10_SYBASE10 -U sa -P password -I /usr/directory/interfaces &
PC: _C:\SERVMON\SERVMON.EXE_

NOTE: The Sybase-supplied sqldl.dll library is installed in \WINDOWS\SYSTEM with SQL Monitor release 10.1.0 for Windows, but in \SERVMON with SQL Monitor release 10.1.2 for Windows.

A user must have the sa_role privilege to access release 10.0 or 10.0.1 SQL Monitor Server or SQL Server. Use the following command to grant that privilege to a user:

exec sp_role `grant', sa_role, `username'

Special Notes for PC Users - SQL Monitor Client
SQL Monitor Client opens two network connections for each open window: one to SQL Monitor Server and one to SQL Server. PC users will be affected most and may need to configure their network software to handle more network connections. For example, if you are using FTP's PC/TCP with its default configuration of six network connections, this limit will be reached with three SQL Monitor windows. An attempt to open a fourth window will result in an error:

SERVMON Cannot access Server - retry after closing an existing Monitor Window

You may also see the above error message when the number of open connections for the SQL Monitor Server has been exceeded. In that case, you must restart the SQL Monitor Server with a larger -n value.
Please refer to your network documentation for specific information on how to increase your network connections. To continue the FTP PC/TCP example, you must edit the "tcp-connection=" entry in the [pctcp kernel] section of the \PCTCP\PCTCP.INI file and then reboot your PC.

SQL Monitor Incompatibility Table
In Volume 3, Number 2 of the SYBASE Technical News, we published a table of uncertified combinations of SQL Server and SQL Monitor Server that contained a typographical error. Here is the corrected table:

SQL Monitor Server | SYBASE SQL Server
_________________________________________________________________
4.9.1              | 10.0.0 (this combination may work, but is not certified)
4.9.1              | 10.0.1
10.0.0             | 10.0.1
10.0.1             | 4.9.x or 10.0.0
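Before pairing a SQL Monitor Server with a SQL Server, it is worth confirming the exact server release; a quick check from isql:

* begin code sample =======================
-- Confirm the SQL Server release before consulting the table above.
1> select @@version
2> go
* end code sample =========================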
Corrections to NetWare Installation Manual
According to the SQL Server Installation Guide for Novell NetWare, the first step of the installation process is to update the NetWare operating system NLMs. This step is necessary only if the NetWare NLMs running prior to installation are older than the NLM versions listed in the Release Bulletin; it is _not_ required if you have more recent NetWare NLMs, which includes the newer versions of NetWare 3.12 and 4.01. When SQL Server 4.2.2 for NetWare was released in March 1993, most customers were installing on NetWare 3.11, so Sybase included more current versions of the NetWare NLMs on the distributed diskettes; customers could then easily upgrade to 4.2.2 without having to track down the correct NetWare NLMs. Today, one year later, the NetWare NLMs have gone through many revisions, and the NLMs on the Sybase diskettes are no longer the most current. Therefore, you should skip the updating step so as not to overwrite the newer NLMs. The recommended installation steps should begin with the section entitled "Step Two: Start SQL Server Installation". After the SQL Server is installed and configured, we recommend that you copy over the most recent Sybase Rollup (Rollup 3183 as of July 1, 1994) and download the following files from CompuServe to bring your NetWare fileserver up to the latest NetWare NLM versions. The tables following the lists of files show the specific NLMs to download for each NetWare version.

Recommended List of NLMs to Load for NetWare 3.11
Download these files from CompuServe:
· STRLI2.exe (novlib forum)
· LIBUP2.exe (novlib forum)
· DFS108.exe (novlib forum)

DIR Listing Information              MODULES List
_________________________________________________________________
NLM Name | Size      | Date     | Module | Date
_________________________________________________________________
sys:\system directory:
_________________________________________________________________
clib     | 328,124   | 2/24/94  | 3.12f  | 2/24/93
mathlib  | 12,458    | 2/24/94  | 3.12f  | 2/24/93
mathlibc | 16,832    | 2/24/94  | 3.12f  | 2/24/93
a3112    | 11,371    | 1/7/94   | 4.00a  | 1/7/94
after311 | 14,411    | 1/7/94   | 4.00a  | 1/7/94
streams  | 53,566    | 7/20/93  | 3.12   | 7/20/93
spxs     | 24,145    | 9/14/93  | 3.12a  | 9/14/93
tli      | 12,474    | 9/14/93  | 3.12a  | 9/14/93
ipxs     | 8,149     | 8/10/93  | 3.12a  | 8/10/93
spxddfix | 1,636     | 9/20/93  | 1.00   | 9/20/93
spxfix2  | 1,599     | 8/20/93  | 2.10   | 8/20/93
spxfsfix | 1,155     | 8/20/93  | 2.10   | 8/20/93
spxlisfx | 1,016     | 8/20/93  | 1.10   | 8/20/93
xmdfix   | 1,496     | 9/15/92  | 1.02   | 9/15/92
patchman | 9,632     | 2/04/93  | 2.30   | 2/4/93
directfs | 16,740    | 7/14/93  | 1.08   | 7/14/93
sybstubs | 901       | n/a      | 4.22   | 3/16/93
_________________________________________________________________
sys:\sybase\nlms directory:
_________________________________________________________________
sqlsrvr  | 1,192,998 | n/a      | 4.22   | 1/18/94

Recommended List of NLMs to Load for NetWare 3.12
Download these files from CompuServe:
· STRLI2.exe (novlib forum)
· LIBUP2.exe (novlib forum)
· DFS108.exe (novlib forum)
· 312IT1.exe (nsd forum)

DIR Listing Information              MODULES List
_________________________________________________________________
NLM Name | Size      | Date     | Module | Date
_________________________________________________________________
sys:\system directory:
_________________________________________________________________
clib     | 328,124   | 2/24/94  | 3.12f  | 2/24/93
mathlib  | 12,458    | 2/24/94  | 3.12f  | 2/24/93
mathlibc | 16,832    | 2/24/94  | 3.12f  | 2/24/93
a3112    | 11,371    | 1/7/94   | 4.00a  | 1/7/94
after311 | 14,411    | 1/7/94   | 4.00a  | 1/7/94
streams  | 53,566    | 7/20/93  | 3.12   | 7/20/93
spxs     | 24,145    | 9/14/93  | 3.12a  | 9/14/93
tli      | 12,474    | 9/14/93  | 3.12a  | 9/14/93
ipxs     | 8,149     | 8/10/93  | 3.12a  | 8/10/93
spxddfix | 1,636     | 9/20/93  | 1.00   | 9/20/93
pm312    | 8,909     | 11/11/93 | 1.11   | 11/11/93
directfs | 16,740    | 7/14/93  | 1.08   | 7/14/93
sybstubs | 901       | n/a      | 4.22   | 3/16/93
_________________________________________________________________
sys:\sybase\nlms directory:
_________________________________________________________________
sqlsrvr  | 1,192,998 | n/a      | 4.22   | 1/18/94

NOTE: These file versions are as of March 1994; the NetWare NLMs will continually need to be updated, and we recommend monitoring CompuServe for release notices. Please refer to the *.txt files included with the files downloaded from CompuServe for detailed information on the purpose of each NetWare NLM.

Correction to NetWare "Zombie" Article
As the above article shows, the name of STRTLI.EXE has been changed to STRLI2.EXE. This file is one of the files recommended for download to forestall the appearance of "zombie" processes on the SQL Server NLM. We apologize for any confusion this may have caused.

Changes in PC Open Client/C Packaging
Customers will shortly see a change in the way Open Client/C release 10.0.1 is packaged for MS-DOS, Windows, OS/2, and Windows NT. The new packaging includes all System 10 Net-Library drivers.
This eliminates the need to order separate Net-Library packages for each platform and brings packaging for PC platforms in line with the existing method of packaging for UNIX and OpenVMS platforms. All repackaging should be complete before the desktop mass-shipment scheduled for maintenance release 10.0.3 and after completion and release of DB-Library release 10.0.1 for each platform. The 10.0.3 maintenance release and mass update, and the final production release of DB-Library 10.0.1, are scheduled for Q4 of 1994.

Bug 54192/34245 - Optimizer
In the latest 4.9.2 SQL Server Rollup, the query optimizer looks at the new multi-column densities only if a special trace flag is set; in SQL Server release 10.0, the optimizer always looks at them. A fix for bug 34245 in the 4.9.2 SQL Server addressed an optimizer problem with multi-column joins by providing multiple densities for multi-column indexes. Prior to this fix, the join selectivity (the proportion of the table to be scanned for each outer row) was estimated by a number referred to as the join's "density": the average proportion of duplicates in the index. The density took the entire index key into account, regardless of whether the query was joining on the entire index key or on a subset of the index key columns. The fix for bug 34245 greatly improved performance for some queries, namely those for which the optimizer had previously selected very costly indexes because of the density limitation. On the other hand, the fix occasionally caused the optimizer to generate some potentially less effective plans. A new bug, 54192, was entered for this problem; its fix makes the multiple-density feature available only if the SQL Server is started with trace flag 321. Customers upgrading to System 10 should be aware that trace flag 321 is not a part of System 10, and that they may encounter the problems for which bug 54192 was entered. Following are an explanation and suggestions for working around the problem.

The Nature of the Bug
Customers doing queries with one or more joins may now see that an index is much less selective on the join columns of some or all indexes. This can result in the index not being used and an alternate plan being chosen, such as using a different index, choosing a different join order, or reformatting to a work table. In some cases, the new plan can be slower than the previously chosen plan, often as the result of "join skew". Join skew occurs when the optimizer does not know the specific values that will be joined at run time and must use an estimate of the average number of rows that will join to a typical value. If the actual number of rows that join is significantly less than the average number used by the optimizer, then the new plan may not be as fast as the plan chosen before this fix.

The resolution to these problems is to determine which indexes demonstrate very poor join selectivity, and to modify the schema to include indexes with good join selectivity on the joining columns of the query. For example, using this schema:

create index idx on tab (col1, col2, col3, col4)

this index will have four sets of densities, on:

col1
col1, col2
col1, col2, col3
col1, col2, col3, col4

For a query joining on col1, col3, and col4, the density of (col1) will be used to estimate join selectivity for tab. If col1 has 100 percent density (all duplicates), the join selectivity will be very poor, and the optimizer will not select this index even though (col3,col4) may be very selective. Modify the schema as follows:

create index idx_c3c4 on tab (col1, col3, col4, col2)

This way, for a query joining on col1, col3, and col4, the density of (col1,col3,col4) will be used to estimate join selectivity for tab. Regardless of the poor join selectivity of (col1), the index will be regarded as highly selective by the optimizer because (col1,col3,col4) is very selective.
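As an illustration only (the table tab is the hypothetical one above; tab2 is a second hypothetical table introduced here to show the join), the before-and-after schema and the kind of multi-column join affected might look like this:

* begin code sample =======================
-- Hypothetical schema for the density discussion above.
create table tab (col1 int, col2 int, col3 int, col4 int)
go
create table tab2 (col1 int, col3 int, col4 int)
go
-- Original index: for a join on col1, col3, col4 only the
-- density of (col1) is considered.
create index idx on tab (col1, col2, col3, col4)
go
-- Reordered index: the density of (col1, col3, col4) now
-- matches the join columns.
create index idx_c3c4 on tab (col1, col3, col4, col2)
go
-- The kind of multi-column join the reordered index helps:
select t.col2
from tab t, tab2 u
where t.col1 = u.col1
  and t.col3 = u.col3
  and t.col4 = u.col4
go
* end code sample =========================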
Bug 55721 - sp_estspace
A customer discovered that entering a different value for iosec in sp_estspace has no effect on the time_mins value. For example:

1> sp_estspace titles,10000,50,
2> "title,notes",0,10000

returns a value of 13 in the time_mins column. Changing the iosec value to 25:

1> sp_estspace titles,10000,50,
2> "title,notes",25,10000

causes no change in the time_mins column. This is because when iosec is initially declared in the procedure, it is hard-coded to 30; the solution is to delete that line in the code. A new version of sp_estspace, included in the comp.databases.sybase FAQ on Usenet news, contains this correction.

Bug 56619 - sp_columns
Bug 56619 was recently reported in SQL Server 10.0.1, for the case in which sp_columns does not report on columns with user-defined datatypes. Customers who encounter this bug can contact Technical Support for a new script to create sp_columns; the script is also available on the OpenLine forum of CompuServe (GO SYBASE). Alternatively, isolate the create sp_columns script from the installmaster script provided with 10.0.1 and make the following change. Remove the line:

AND t.name = d.type_name

Replace it with:

AND t.type = d.ss_dtype

This line appears in the sp_columns definition, just below this comment: "We need an equality with sybsystemprocs.dbo.spt_datatype_info here so that there is only one qualified row returned from sybsystemprocs.dbo.spt_datatype_info." Drop sp_columns and run the new script.

Q5.3: TECHNICAL NEWS

Volume 4, Number 1 February, 1995
_________________________________________________________________

Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose. In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental, or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

Staff
Principal Editor: Leigh Ann Hussey
Contributing Writers: Brijesh Agarwal, Perry Bent, Siva Chandrasekharan, Scott Duncan, Aimee Grimes, Cris Gutierrez, Chandrika Krishnan, Cristina Nitescu, Daniel O'Rourke, Pearl Ong, Chuck Pavelka, Gerald Soulos, Loretta Vibberts, Robert Weaver, Elton Wildermuth, Melissa Yeary

Send comments and suggestions to: SYBASE Technical News, 6475 Christie Avenue, Emeryville, CA 94608

This issue of the SYBASE Technical News contains new information about your SYBASE software. If needed, duplicate this newsletter and distribute it to others in your organization. All issues of the SYBASE Technical News and the Troubleshooting Guides are included on the AnswerBase CD.
_IN THIS ISSUE:_

Technical Support News/Features
· Customer Site Visits Planned
· Troubleshooting Guide Update

SQL Server
· Error 428 or 4408: Referential Integrity and the 16-Table Limit
· The Identity Column, Definition and Purpose
· Problems with sybload on Solaris
· Note on Intel Pentium Floating Point Bug
· Problems with Backup Server and AIX 3.2.5
· Hazards of Copying Dumps with AIX Utilities

Connectivity / Tools / PC
· Open Client/MF: Undocumented Return Code
· PowerBuilder AutoCommit and Open Transactions
· Stratus VOS and TCP Certification

Fixed Bug Reports
· Bug 63348 - kill on Multiprocessor Servers
· Bug 62149 - Stored Procedure with Variable and compute Clause Raises Error 411
· MAP Doc Corrections
· Latest Rollups for SQL Server 4.9.2
· Bugs Fixed in Latest Rollup, SQL Server 4.9.2
· Latest Rollups for SQL Server 10.0.2
· Bugs Fixed in Latest Rollup, SQL Server 10.x

Customer Site Visits Planned
Technical Support Publications wants to learn how you use Sybase products and documents, so that we can give you better technical documentation. If you are located in the Bay Area and would be willing to have Sybase employees visit your workplace, send mail to technews@sybase.com. If a visit is arranged, a small group of writers and engineers will visit your site and observe how our products and documents work (or don't work) for you. We want to hear your concerns. We're here to help!

Troubleshooting Guide Update
The format of the SQL Server Troubleshooting Guide has changed to be more useful to you. The next edition will appear in two volumes: SQL Server Error Messages, and the SQL Server Troubleshooting Guide (for general troubleshooting information). Both manuals are available on the volume 2, number 2 AnswerBase CD (due for shipment in May), and both will also be available in the SyBooks System 10 CD update. The Tools and Connectivity Troubleshooting Guide, last updated in December, makes its final appearance as a single volume in the Q2 AnswerBase CD. Because of the diversity of products covered by this guide, and the expansion of some of its sections, in the future we will publish separate AnswerBase FAQ lists for each product. We will introduce this format in the Q3 AnswerBase CD.

Error 428 or 4408: Referential Integrity and the 16-Table Limit

Question
Update to a single table in SYBASE SQL Server release 10.x fails with Error 428:

Too many table names or referential constraints in the query, maximum allowed table references is '16'.

In 4.9.x, Error 4408 may be raised instead:

The query and the views in it exceed the limit of 16 tables.

Why does this happen, if I am only updating one table?

Answer
The limit on the number of tables any given query can touch is 16. A query on a table with referential constraints (that is, many primary and foreign keys) can refer to more tables than the limit allows. When you hit the limit with such a table, the update fails. For example, suppose there are 15 tables, named c1-c15, each of which has one primary key and a foreign key constraint referring to another table, p. There are also 16 further tables, p1-p16, and one more table, c, with foreign keys referring to all of them.
Each one of the tables c1-c15 is built like this:

create table c1
(ccol1 smallint, ccol2 smallint,
primary key (ccol1),
constraint pc1 foreign key (ccol2) references p (pcol1))
go

p is built like this:

create table p
(pcol1 smallint, pcol2 smallint,
primary key (pcol1))
go

And the remaining 16 tables (p1-p16) are built like this:

create table p1
(pcol1 smallint, pcol2 smallint,
primary key (pcol1))
go

Now build one more table, c, with foreign keys referring to all 16 of the tables p1-p16, like this:

create table c
(pcol1 smallint, ccol2 smallint, ccol3 smallint, ccol4 smallint,
ccol5 smallint, ccol6 smallint, ccol7 smallint, ccol8 smallint,
ccol9 smallint, ccol10 smallint, ccol11 smallint, ccol12 smallint,
ccol13 smallint, ccol14 smallint, ccol15 smallint, ccol16 smallint,
ccol17 smallint,
primary key (pcol1),
constraint c2p1 foreign key (ccol2) references p1 (pcol1),
constraint c3p2 foreign key (ccol3) references p2 (pcol1),
constraint c4p3 foreign key (ccol4) references p3 (pcol1),
constraint c5p4 foreign key (ccol5) references p4 (pcol1),
constraint c6p5 foreign key (ccol6) references p5 (pcol1),
constraint c7p6 foreign key (ccol7) references p6 (pcol1),
constraint c8p7 foreign key (ccol8) references p7 (pcol1),
constraint c9p8 foreign key (ccol9) references p8 (pcol1),
constraint c10p9 foreign key (ccol10) references p9 (pcol1),
constraint c11p10 foreign key (ccol11) references p10 (pcol1),
constraint c12p11 foreign key (ccol12) references p11 (pcol1),
constraint c13p12 foreign key (ccol13) references p12 (pcol1),
constraint c14p13 foreign key (ccol14) references p13 (pcol1),
constraint c15p14 foreign key (ccol15) references p14 (pcol1),
constraint c16p15 foreign key (ccol16) references p15 (pcol1),
constraint c17p16 foreign key (ccol17) references p16 (pcol1))
go

sp_helpconstraint on p gives this output:

name          defn
_________________________________________________________________
pc1           c1 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc10          c10 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc11          c11 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc12          c12 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc13          c13 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc14          c14 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc15          c15 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc2           c2 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc3           c3 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc4           c4 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc5           c5 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc6           c6 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc7           c7 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc8           c8 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
pc9           c9 FOREIGN KEY (ccol2) REFERENCES p(pcol1)
p_7325296431  PRIMARY KEY INDEX ( pcol1) : CLUSTERED, _FOREIGN REFERENCE_
(16 rows affected, return status = 0)

sp_helpconstraint on c gives this output:

name          defn
_________________________________________________________________
c10p9         c FOREIGN KEY (ccol10) REFERENCES p9(pcol1)
c11p10        c FOREIGN KEY (ccol11) REFERENCES p10(pcol1)
c12p11        c FOREIGN KEY (ccol12) REFERENCES p11(pcol1)
c13p12        c FOREIGN KEY (ccol13) REFERENCES p12(pcol1)
c14p13        c FOREIGN KEY (ccol14) REFERENCES p13(pcol1)
c15p14        c FOREIGN KEY (ccol15) REFERENCES p14(pcol1)
c16p15        c FOREIGN KEY (ccol16) REFERENCES p15(pcol1)
c17p16        c FOREIGN KEY (ccol17) REFERENCES p16(pcol1)
c2p1          c FOREIGN KEY (ccol2) REFERENCES p1(pcol1)
c3p2          c FOREIGN KEY (ccol3) REFERENCES p2(pcol1)
c4p3          c FOREIGN KEY (ccol4) REFERENCES p3(pcol1)
c5p4          c FOREIGN KEY (ccol5) REFERENCES p4(pcol1)
c6p5          c FOREIGN KEY (ccol6) REFERENCES p5(pcol1)
c7p6          c FOREIGN KEY (ccol7) REFERENCES p6(pcol1)
c8p7          c FOREIGN KEY (ccol8) REFERENCES p7(pcol1)
c9p8          c FOREIGN KEY (ccol9) REFERENCES p8(pcol1)
c_3610523221  PRIMARY KEY INDEX ( pcol1) : CLUSTERED
(17 rows affected, return status = 0)
An attempt to insert or delete will give the following result:

1> delete p where pcol1 = 1
2> go
Msg 428, Level 16, State 1:
Line 1:
Too many table names or referential constraints in the query, maximum allowed table references is '16'.
1> insert c values (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
2> go
Msg 428, Level 16, State 1:
Line 1:
Too many table names or referential constraints in the query, maximum allowed table references is '16'.

UPDATE commands will produce similar results. An update through a view can also trigger this error if the view includes more than 16 tables. When setting up constraints on your tables, look at the maximum number of tables that might be touched by an update, insert, or delete statement before you set up your constraints. When setting up views, be careful not to include more than 16 tables in a view if you plan to update through it.

The Identity Column, Definition and Purpose
Customers have for some time wanted a way to have an automatically incrementing unique key column in a table. The 10.0 release of SQL Server provides this functionality: the user defines a column with the IDENTITY property, and SQL Server maintains it automatically.

Characteristics of the IDENTITY Property
IDENTITY is a column property, like NULL or NOT NULL. Its characteristics are:
· Only positive integers are allowed. The column may not contain zero (0), NULL, or any fraction.
· The column's value starts with 1 and always increments by 1 up to the maximum value.
· When the maximum value is reached, there is no wraparound; the next attempt to insert raises an error message.
· The identity column is not updatable; when a row is updated, the value in its identity column does not change.
· When you insert a new row, you need not supply an identity value; the identity value is automatically generated and inserted for you.
· If you need guaranteed-unique identity column values, put a unique index on the identity column. Identity column values are automatically unique unless you insert a value directly.
· Gaps may occur upon system crash or shutdown with nowait. See "Preburn Issues" below.
· Gaps may also appear between identity column values because of rollback or delete operations; identity values are not reused.

In the example below, counter_id is the identity column in the table running_counter. Note that the rolled-back insert consumes identity value 3, so the later insert receives 4:

1> select * from running_counter
2> go
counter_id next_number
---------- -----------
         1           2
         2           5
(2 rows affected)
1> begin tran
2> select * from running_counter
3> insert running_counter (next_number) values (10)
4> select * from running_counter
5> rollback tran
6> go
counter_id next_number
---------- -----------
         1           2
         2           5
(2 rows affected)
(1 row affected)
counter_id next_number
---------- -----------
         1           2
         2           5
         3          10
(3 rows affected)
1> insert running_counter (next_number) values (10)
2> go
(1 row affected)
1> select * from running_counter
2> go
counter_id next_number
---------- -----------
         1           2
         2           5
         4          10
(3 rows affected)

Setting Up Identity Columns
The identity column of a table is a numeric datatype with the IDENTITY property.
Setting Up Identity Columns

The identity column of a table is a numeric datatype with the IDENTITY property. Because the numeric datatype allows for digits to the right of the decimal place in its definition, but the identity column does not, you must define the number of decimal digits as zero, as in the following example:

    create table sales_daily
    (stor_id char(4) not null,
     ord_num numeric(10,0) identity,
     ord_amt money null)

The maximum number of digits permitted for the identity column is 38. Be very careful of large identity column values. As shown in the following section, "Preburn Issues", large values can make for very large gaps. The larger your gaps, the sooner you hit your maximum value.

Preburn Issues

How does SQL Server keep track of the next value generated for the identity column? There is a SQL Server configuration value, set with sp_configure, called IDENTITY BURNING SET FACTOR, that works like this:

· Initially, a set of n values is preburned, or stored in memory. These values are reserved to be used up by the next n inserts.
· Each identity column on SQL Server has its own set of preburned values.
· Each insert gets its value out of the preburned set.
· When the set is exhausted, the next set of n values is made available for use by the next n inserts.

Note: You may wonder why this method was chosen, rather than having SQL Server write the maximum column value to storage and log it for each insert. The answer is that the store-and-log method is performance intensive, requiring two extra writes per insert.

The identity burning set factor is based on a percentage of the maximum value that can be held in an identity column. The default percentage is 0.05%, or 0.0005; however, sp_configure values must be integers and cannot hold decimal places. Therefore, to turn the factor into an integer for sp_configure, multiply it by 10**7. To get the default factor:

    0.0005 * 10**7 = 5000

sp_configure stores the value 5000. The value for sp_configure identity burning set factor can be between 1 and 9,999,999. To determine the sp_configure value based on the number of values you wish to preburn, use the following formula:

    preburned_set / (maximum + 1) = percentage
    percentage * 10**7 = identity burning set factor

For example, given a table with an identity column containing a maximum of four digits, the highest value it can hold is 9999. If you want to preburn 10 values for that column, use the following formula:

    10 / (9999 + 1) = .001 (0.1%)
    .001 * 10**7 = 10000

So you should configure identity burning set factor as follows:

    sp_configure "identity burning set factor", 10000
    reconfigure

There are two major implications with this system:

· Because it is a value set with sp_configure, the same preburn factor is server wide, applying to all tables on the server.
· Large preburn sets can cause large gaps if there is a system crash.

Preburn Factor is Server Wide

You may have identity columns of different sizes on your server. To find out, given a set preburn factor, how many values will be burned for a given table, use the formula:

    preburned set size = (maximum + 1) * (burning factor * 10**-7)

For example, given the default burning factor of 5000:

· A table with an identity column defined as numeric(4,0) will burn (9999 + 1) * (5000 * 10**-7) = 5. After five insertions, a new set of the next five higher values is reserved to be used by the next five insertions.
· A table with an identity column defined as numeric(6,0) will burn (999999 + 1) * (5000 * 10**-7) = 500.
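Before changing the factor, you may want to display the current setting. A minimal sketch (running sp_configure with only the parameter name reports the current value; the output below shows the default, with config_value 0 meaning "unchanged"):

    1> sp_configure "identity burning set factor"
    2> go
    name                         minimum  maximum  config_value  run_value
    ---------------------------  -------  -------  ------------  ---------
    identity burning set factor        1  9999999             0       5000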
You can mitigate the impact of this situation somewhat if you are able to standardize the size of your identity columns by setting up a user-defined datatype to use as the datatype for identity columns, for example:

    sp_addtype sitekey, numeric(6,0), "identity"

With this type in place, the example table created above would look like this:

    create table sales_daily
    (stor_id char(4) not null,
     ord_num sitekey,
     ord_amt money null)

You should determine whether the needs of your particular site can support this method, and use it if so. Set up the datatype for your identity columns in model to propagate it to each new database created.

Large Preburn Sets Can Cause Large Gaps

If there is a system crash, the larger your preburned set, the larger the gap in your identity values will be, because all unused, preburned numbers kept in memory will be lost. For example, if you define your identity column with the maximum value of numeric(38,0), even with the smallest possible preburn factor of 1, you potentially lose millions of values in a system crash. shutdown with nowait causes these values to be lost in exactly the same way as a system crash does; use a regular shutdown to avoid the loss.

Be aware that there are also some circumstances under which dump/load can cause loss of identity values. Object Allocation Manager (OAM) pages store the page numbers of pages used, or allocated but not yet used, for every object. The Backup Server reads directly from the disk; the OAM page stores the maximum reserved identity value, while the maximum used value and the maximum reserved value are tracked in the memory structure for the table. SQL Server writes the maximum used value back to the OAM page when the memory structure is scavenged or the server is shut down without nowait. If the maximum reserved value is not the same as the maximum used value at the time of the dump, then the maximum reserved value gets written out and restored on a load, causing a loss of identity values.

Sybase is currently looking at ways to correct these issues. SYBASE Technical News will keep you informed of any changes!

Problems with SYBLOAD on Solaris

Some of our customers are having problems using the Sybase installation utility sybload on Sun Solaris 2.x when installing products from tape. After entering all the information requested by sybload, they get the message "permission denied" when sybload tries to access the tape.

Some instructions were left out of the Release Bulletin SYBASE SQL Server Release 10.0 for SunOS Release 5.x (SVR4). This will be remedied. In the meantime, all new SQL Server for Solaris shipments include a fluorescent flyer with the missing instructions. If you are experiencing permission problems accessing the tape on Solaris, add the local machine's host name to the $HOME/.rhosts file of the user running sybload, and also to the /etc/hosts.equiv file (or its equivalent) for your site.

Note on Intel Pentium Floating Point Bug

An AnswerBase TechNote, dated December 7, 1994, "Test for Intel Pentium Floating Point Bug," describes some Transact-SQL commands that can be used to demonstrate the Pentium bug. This test is valid for SQL Server 4.8.x, 4.9.x, and 10.0.x.

If the described Transact-SQL commands are executed on a 4.2.2 server, they will execute correctly, even if the processor has the bug. The bug can be detected only by using a program which takes advantage of floating point operations.
These commands will not replicate the problem on the 4.2.2 SQL Server on Novell because the chip's floating point capability was not available at the time of the 4.2 Server release. The Pentium floating point problem is not an issue in the 4.2.x SQL Server release on the Novell platform.

Problems with Backup Server and AIX 3.2.5

Question

I have difficulty connecting to Backup Server 10.0.x when running on new RS6000 platforms preloaded with AIX 3.2.5. The symptom is that Backup Server seems to hang.

Answer

The problem may be a communication time-out. It has been fixed by IBM in PTF U435001, which addresses APARs IX46162 and IX44358:

· APAR IX46162: process hang while waiting for sockbuf lock
· APAR IX44358: fcntl does not create nonblocking sockets

Since September 28, 1994, three rolled-up releases of AIX 3.2.5 have been shipped, AIX 3.2.5.2(E1) through AIX 3.2.5.5(E4). If you have any of these releases you may be affected by this IBM bug. Contact IBM for fix PTF U435001 (or the current version IBM recommends).

Note

Do not confuse the problem described above with an intermittent Backup Server hang. If SQL Server normally connects to Backup Server, but periodically the connection fails, verify that you have the following fixes installed:

· IBM APAR IX45257 (PTF U432973)
· Sybase EBF 3580 (recommended)

Sybase EBF 3580 is included with the 10.0.2 Backup Server.

Hazards of Copying Dumps with AIX Utilities

The System Administration Guide for SYBASE SQL Server Release 10.0 (dated September 14, 1993) reads, "Dumps to a file or disk can be copied to tape for off-line storage." However, Sybase does not recommend using any UNIX operating system commands except tar, dd, cpio or dumpfs to copy files to tape. Specifically, there seems to be a problem with the AIX backup utility, and with Logical Volume Manager (LVM) utilities like cplv and importlv. Using these utilities to copy dump files to tape, or to move data from one logical device to another, may lead to database corruption.

Explanation

LVM creates a logical device of type "copy" when you invoke cplv, specifying sector 0 as the control block. When you copy your data to the new device, sector 0 on the original device becomes sector 1 on the new device. If you specify a device type unknown to AIX, AIX will initialize sector 1. If you have installed a database on a device that does not have sector 0 initialized as the control block for the device, and use cplv to move data, SQL Server will be confused about the address of the control block, and you will get these errors in the error log:

    kernel: dconfig: unable to read primary master device
    kernel: kiconfig: read of config block failed

This particular error can be avoided if you do disk init with the vstart option to skip the first sectors, making sure to set size one page smaller than normal for every block that vstart is offset from 0 (see the worked example at the end of this section):

    disk init name = "skip_dev",
    physname = "/dev/something",
    vdevno = ##,
    size = ###-1,
    vstart = 1

The backup utility can also change blocks, rendering your backups unreadable. To avoid backup problems, use tar or dd to copy dump files to tape, and dd to move a dump file from one device to another.
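As a worked example of the vstart workaround above (the device name, physical path, and vdevno here are hypothetical): a 10MB device is normally 5120 two-kilobyte pages, so with vstart = 1 you would specify size = 5119, one page smaller than normal.

    1> disk init name = "skip_dev",
    2> physname = "/dev/something",
    3> vdevno = 4,
    4> size = 5119,
    5> vstart = 1
    6> go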
Open Client/MF: Undocumented Return Code

Open Client/Mainframe (OC/MF) for CICS can return an undocumented return code:

    _-25 (TDS_GWLIB_FUNCTION_NOT_AVAILABLE)_

If this code is returned, it means that the transaction is requesting a function that is not part of the OC/MF API. The OC/MF API is a subset of generic Open Client; it does not include all Open Client calls. If you get this return code, check your application code for Open Client calls that are not documented in the Open Client/Mainframe Programmer's Guide, which contains a list of all supported functions.

PowerBuilder AutoCommit and Open Transactions

SQL Server may experience problems because of long-running transactions started by PowerBuilder applications. These problems can include extended recovery times or users waiting on locks. For this reason, it is generally recommended when using PowerBuilder with SQL Server that you allow SQL Server to handle transaction management.

PowerBuilder AutoCommit

PowerBuilder defines information regarding a connection to a database in a transaction object, which PowerBuilder describes as "a non-visual object that identifies a specific database connection within a PowerBuilder application." In other words, a transaction object is a set of attributes defining the connection. An application can instantiate (define) its own transaction object or use the default transaction object, SQLCA.

One of the transaction object attributes that causes confusion for Sybase users is AutoCommit. To set the AutoCommit attribute for a transaction object, you define:

    transaction_object_name.AutoCommit = TRUE
or
    transaction_object_name.AutoCommit = FALSE

The default value for AutoCommit is "FALSE". A value of "FALSE" for AutoCommit means that when the connect statement is used within a PowerBuilder application, PowerBuilder executes a begin transaction upon making the database connection. If AutoCommit has a value of "TRUE", then no transaction is started upon the completion of the connection to the database. The following table shows how AutoCommit affects other PowerBuilder statements:

Table 1: Effect of AutoCommit on other PowerBuilder statements

    Statement   Description
    ----------  ------------------------------------------------------
    COMMIT      If AutoCommit is "FALSE", PowerBuilder will issue a
                commit transaction and then a begin transaction. If
                AutoCommit is "TRUE", an error will occur.
    DISCONNECT  If AutoCommit is "FALSE", PowerBuilder will issue a
                commit transaction.
    ROLLBACK    If AutoCommit is "FALSE", PowerBuilder will issue a
                rollback transaction and then a begin transaction. If
                AutoCommit is "TRUE", an error will occur.

Having AutoCommit set to "FALSE" can result in long-running transactions and the possibility of locks being held that a user is unaware of. If you don't clearly understand the ramifications of setting AutoCommit to "FALSE", it is better to set it to "TRUE" and allow your application to use the implicit transaction management provided by SQL Server.
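In Transact-SQL terms, the statement stream PowerBuilder produces around a connection with AutoCommit = FALSE looks roughly like the following sketch. This is an illustration of Table 1, not literal PowerBuilder output; the comments mark the PowerBuilder events.

    begin transaction       /* issued at CONNECT */
        /* ... application DML runs inside the open transaction,
           holding its locks until the next COMMIT or ROLLBACK ... */
    commit transaction      /* issued at COMMIT */
    begin transaction       /* a new transaction is opened immediately */
        /* ... */
    commit transaction      /* issued at DISCONNECT */

Note that a transaction is always open between CONNECT and DISCONNECT; an idle user can therefore hold locks indefinitely, which is the failure mode described above.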
Stratus VOS and TCP

Question

I am running SQL Server on Stratus VOS. I find I cannot connect to TCP, and the following errors appear in the SYBASE errorlog:

    os_init 0 tcp connections configured. At least 2 are required
    to run the tcp network
    Error 1602 Unable to initialize network 9

Answer

This error is generally caused by the parameter ctcpconnections being set incorrectly in the SQL Server configuration block. This parameter is only available in Stratus VOS implementations. The first default network type for Stratus VOS SQL Server was a proprietary type; TCP was added later, for greater convenience.

Solution

You must set ctcpconnections to 2 plus the maximum concurrent TCP connections to the SQL Server, as follows:

    buildmaster -ddevicename -yctcpconnections=2[+n]

where n is your maximum concurrent TCP connections.

Warning! buildmaster -y is not documented in the standard documents; please use it only under the direction of Sybase Technical Support.

Bug Reports

The following sections describe some current bugs, and are followed by the full reports of bugs fixed in the latest EBFs for SQL Server 10.0 and 4.9.2.

Bug 63348 - KILL on Multiprocessor Servers

The symptoms of bug 63348 are: the SQL Server seems to stop for a while, then suddenly becomes active again; and a task locks up until another task is executed, or checkpoint is run. A degradation in performance may be noticed for the server. The problem is caused by the use of the kill command, and will only occur when a thread that is running is killed. This can only happen on a multiprocessor server; there must be more than one engine. This bug affects multiprocessor servers on all platforms.

Explanation

When kill is used, the internal variable used to track the number of running threads is decremented by one. When kill is used on a running thread, this variable becomes out of sync with the actual number of runnable threads on the queue; the variable's count goes to zero, and the server incorrectly thinks there is nothing to schedule. However, as soon as something else becomes runnable (such as an extra request over the network, or checkpoint running), the server wakes up again. Rebooting SQL Server clears the problem temporarily.

The problem does not occur on single-engine servers, because on a single-engine server only one thread can be running at once; if that thread is executing the kill, the server cannot be running another thread.

This bug has been fixed, and the fix is being tested prior to incorporation in a future Rollup. If you are experiencing this bug, please contact Technical Support to get on the waiting list for the fix.

Bug 62149 - Stored Procedure with Variable and COMPUTE Clause Raises Error 411

A stored procedure with an input parameter, a different declared variable inside the procedure, and a select query inside the procedure which selects the declared variable and then has a compute clause on the variable, gets a 411 error. For example:

    create procedure foo
        @inarg int                    /* input parameter */
    as
        declare @localarg int         /* declared variable */
        select @localarg = 1
        select dbid, name, @localarg  /* select using the declared variable */
        from master.dbo.sysdatabases
        order by 3, 1                 /* required in conjunction with compute */
        compute sum(@localarg)        /* compute clause on the variable */
    go

    Msg 411, Level 16, State 1:
    Server 'testserver', Procedure 'foo', Line 8:
    Compute clause #1, aggregate expression #1 is not in the select list.

All the listed parts must be present for the bug to appear:

· If you remove the (unused) input parameter, the procedure works.
· If you compute sum(dbid) rather than sum(variable), the procedure works.

This bug is fixed for 4.9.2, and a port is due for 10.0.2.

MAP Doc Corrections

Please note the following corrections to your Mainframe Access Products (MAP) manuals:

OmniSQL Access Module Installation and Administration for DB2-CICS (Doc ID 34245-01-1010-01): In Chapter 7, under "Installation Checklist", step 5 should read "DFHRPL concatenation" instead of "STEPLIB concatenation".

OmniSQL Access Module Installation and Administration for DB2-IMS (Doc ID 34369-01-1010-01): In Chapter 6, under "Installation Checklist", step 8 should be placed after steps 9-11.

EBF News

The following tables list the latest Rollup numbers for SQL Server 4.9.2 and 10.x on selected platforms, and selected bugs fixed in them.
Latest Rollups for SQL Server 4.9.2

    Platform                           Rollup Number
    ---------------------------------  -------------
    SunOS Release 4.x (BSD)            4152
    HP 9000 Series 800 HP-UX           4253
    IBM RISC System/6000 AIX           4154
    AT&T System 3000 UNIX SVR4 MPRAS   4155
    Digital OpenVMS VAX                4156
    Sun Solaris 2.x                    4157
    Digital OpenVMS Alpha 1.5          4158
    Digital OpenVMS Alpha 1.0          2121

Bugs Fixed in Latest Rollup, SQL Server 4.9.2

Bug Number  Description
----------  -----------

63393  The server crashes after the errorlog message "current process infected with 10". The trace information which follows shows that the error occurred in the routine des__hashcheck, and that the Resource->rdesmgr_spin spinlock is currently being held.

63030  Views get corrupted under certain conditions. Selecting from the corrupted view returns wrong values until the view is dropped and re-created. The corrupted view returns the same data for all the columns.

62149  See the full description under "Bug Reports" above.

61902  539 errors result in orphaned locks. Stack trace shows indexsplit as one of the routines called.

61850  Insert into a table with a timestamp column in a stored procedure causes Error 207 (invalid column) for the timestamp column when the procedure gets renormalized. This also causes Error 273 for the timestamp column if you try to insert a NULL value for the timestamp.

61168  Certain aggregate expressions are incorrectly regarded as being identical. As a result, one of the expressions gets thrown away, and the other is used instead.

60980  SQL Server fails to perform a switch from a failed disk device to a good device when disk mirroring is enabled.

60809  Added support for the TCP_NODELAY option, which can be enabled via the 1610 trace flag. The server should be booted with `-T1610' to turn on this option.

60122  [REPSRV] Fix 58464 tries to reduce the restart latency time of the SQL/LTI log scan during a sharp increase in XACT log activity. However, the fix causes a 30 percent performance reduction with asynchronous disk I/O facilities which do `page hiding' (such as IBM RS6000).

60099  When an attention is received while executing Remote Procedure Calls on another server, the endpoint is prematurely cleaned up. This prevents the attention from being forwarded to cancel the RPC on the remote server, which is left suspended.

59308  Certain insert into statements using union and a view may fail with a stack trace, and the isql connection gets aborted.

58509  A signal 11 segment violation produced a stack trace during a query dealing with views.

58464  If there is a sharp increase in XACT log activity (such as from several large bulk copies) after a short period (5 minutes) of no XACT log activity, the SQL Server may delay the sending of the new XACT log information to the Replication Server for as much as 10 minutes.

58411  "if exists" generates a segmentation violation when the where clause contains concatenation of convert with subqueries. This occurs during parse time.

58192  Mirroring a device onto the master device should not be allowed; a warning should be generated.

58047  If a temporary table is dropped and created in one batch, it is not accessible and is not dropped even after the isql session is over.

57989  diagserver panics while killing SERVER tasks.

57873  Server returns duplicate rows when a query has a group by clause and a having clause which in turn has aggregates and an or clause.

57756  DBCC checkalloc resets the 3604/3605 flags at the end of execution even if the Server was started with traceflags 3604/3605 on the command line.

57673  [REPSRV] The log scan thread sometimes is not awakened when there are records to be read in the log.
57592  Server shuts down with the messages "ubfree: block 0x62a9800 already free" and "Stack overflow detected" and a stack trace while resolving a huge stored procedure.

57576  LOAD TRAN recovery always aborts a 2-phase XACT in PREP but (wrongly) leaves the XEND log record in PREP. If the XEND log record is seen again by boot time recovery, error 3429 occurs if the 2-phase XACT commits after the LOAD TRAN but before reboot.

57338  Error 407 is raised when issuing select distinct with a view. ("Unable to allocate ancillary table for query optimization....")

57273  Server could reject previously accepted large queries with a 703 error ("Memory request failed because more than 64 pages are required to run the query") if the query uses explicit/implicit CONCAT nodes.

57002  When using a group by clause in a select statement, the matching of the select list and the group by list reshuffles the query tree too early and causes an erroneous result in queries whose pattern is of the kind:

       select col_a, col_a + col_b from some_table
       where some_selectivity_condition
       group by col_a, col_a + col_b

56959  A select str(value,n,2) statement returns -0.-0 instead of -0.00 when used on a float column containing the value `-0.0'.

56927  load database results in a stack trace: lddb_main->lddb_fixdbosuid->update->insert->cinsert->datasplit->linksplit->ncrid_update->delirow->bufdirty.

56652  2805 and 2812 errors with stored procedures containing compute clauses when multiple copies in cache are being used.

56635  select substring(@@version,12,2) returns NULL.

56540  sp_lock: On a system with thousands of locks in use, overflowing the temporary buffer used to insert into syslocks can also result in process infection and a stack trace.

56430  When performing an outer join where the qualifying inner table is NULL, the returned values for the inner table column are NULLs (when they in fact are not).

56393  When committing the DEALLOC records for text-delete, a small window exists in which the deallocated first text page could be allocated again to some other object before the original transaction completes and locks are released.

56283  Timeslice error occurs while recovering the transaction log.

56185  ex_raise from a sort of a worktable with a dynamic index results in 1129 on the OAM page of the worktable being sorted. Worktables are not being handled as temporary tables when they should be.

56127  Range levels are incorrect when a subquery selects from a view.

56064  The detection of KILLYOURSELF status in the lock manager is disabled, until all code paths have been examined for proper handling of lock interrupts. Killing a task waiting on a lock will only succeed if the lock can be granted.

56014  A few long-running XACTs and/or a multitude of shorter XACTs can cause detection delays of up to 10 minutes.

56013  select from table order by... from a table that has text columns holds shared page locks (as if it is doing level 3 locking) and eventually runs out of locks with error 1204.

55799  Search density is not used in the case where a search value is in multiple steps or outside the step range, and therefore it should not include those values.

55756  When executing sp_dropmessage, a user who is not the database owner, not the creator of the message, and does not have the System Security Officer role gets an error saying "unmatching argument list to PRINT command".

55180  Reformatting of a work table is not done, as a result of a wrong page and row estimate for a temporary table created in the same batch and used later in a join with a user table in the same batch. 302 trace output shows (rows: 1 pages: 1).

55017  When a truncate table is interrupted due to an attention, the transactions in progress are not cleared.

54952  When lock manager trace 1204 is active, the display of a single chain deadlock generates time-slice errors.

54471  Inserting into a view using a select statement with a group by clause in which no aggregate is specified in the select list causes the client to be killed, and a "process infected with Signal 10" error is reported in the server's error log.

54388  Inserting into a very large table causes 1127 errors when the maximum number of OAM page entries is reached.

54367  Unable to reference temporary tables created by a parent stored procedure in the child stored procedure. The data returned from the child stored procedure is corrupted.

53879  Dropping a work table using the full object name (such as tempdb..#tbl) doesn't free the memory associated with the temporary table. If a process creates and drops lots of temporary tables, it may eventually die with a 703 error.

53837  If you create a table with more than 8 columns, insert some values, revoke update on one column of the table, and then try to update the eighth or higher columns of the table, error 230 is raised ("permission denied on column col9 of object foo...").

53532  When a PC Windows client machine is turned off while OMNI is in the process of a select to DB2 returning a very large result set, OMNI does not detect the loss of the client, disconnect the user, or close the thread to a Net-Gateway.

52733  Executing a procedure which declares a variable @var, assigns a procedure name to it, and then tries to exec @var may result in a stack trace.

52174  Add limited conversion functionality between EUCJIS SQL Server and SJIS clients.

51604  The SQL/LTI thread assumes a stored procedure's EXECBEGIN log record is written into the same database where the sproc is stored. This is not true if the procedure executes from within a user XACT started in another database. A 9141 error will result.

50987  select x = null into table returned the error message `Column `x' specifies storage type 31, which does not currently exist.' The column `x' should be created with a default datatype of nullable int.

49514  Different presentation format (exponential) of the output (if big enough) from sp_spaceused.

45987  System stored procedure sp_remove_xact has a parameter @commid defined as a smallint; it should be defined as an int.

35593  Disk remirror of the master device does not update the cmastmirror configuration value of the master device. If the user had done a buildmaster -r before disk remirror, the server would not activate the mirror, giving the message "dopen : "" is a directory."

33368  An order by of a union of two select statements that contain text columns previously gave a 420 error (Image/Text not allowed in order by), although the Text/Image column was only in the select list.

17738  SERVER error message 4207 should read "Dump transaction is not allowed because a non-logged operation was performed on the database. Dump your database or use dump transaction with truncate_only until you can dump your database."
Latest Rollups for SQL Server 10.0.2

    Platform                           Rollup Number
    ---------------------------------  -------------
    SunOS Release 4.x (BSD)            4009
    HP 9000 Series 800 HP-UX           4010
    IBM RISC System/6000 AIX           4011
    AT&T System 3000 UNIX SVR4 MPRAS   4012
    Digital OpenVMS VAX                4013
    Sun Solaris 2.x                    4014
    Digital OpenVMS Alpha 1.5          4015
    Digital OSF/1                      4019

Bugs Fixed in Latest Rollup, SQL Server 10.x

Bug Number  Description
----------  -----------

17668  count(*) does not return correct results in a view on three tables and three columns, updating two columns of another table with the count from the view. It works when done one at a time and using the table directly (select into).

18345  sp_renamedb fails to change the internal dbi_dbname variable. This causes incorrect 3508 messages, among other possible side effects.

19144  When inserting from a table that allows NULL values to a table that is defined as not NULL, convert "sneaks" NULL values in. When you select distinct on the new table, the discrepancy shows up.

25462  Use of a table correlation name within a correlated subquery does not correspond to the ANSI 1989 SQL standard, page 46; non-use of a table alias within an inner select statement does not revert the table queried to that used in the outer select.

25844  "looping 2714s"; sample stack: _kisignal sigtramp.o sigtramp.o _s_free _s_pop _s_cleanframe _clean_process _kill_proc __ex_cleanup _kisignal sigtramp.o sigtramp.o _s_free _s_pop _s_cleanframe _clean_process.

28221  Recovery modifications needed to avoid recompleting transactions from the earliest BEGIN RID record in the log to the checkpoint record.

29069  When updating a single column of one row on which there exists a unique nonclustered index, an excessive number of pages is locked. The exact scenario using a unique clustered index locks the correct number of pages.

31730  When the server is stopped, a "%SYSGEN-E-CONFIGERR, CONFIGURE process" error is reported on the operator's console. The operator's console is left in a hung state, not allowing any logins.

32566  When the error message for error 821 is printed, a newline character is missing, which makes the user errorlog difficult to read.

33151  During iterative bcp data-in sessions, a segmentation fault occurs with an incomplete stack trace, and time slice results in Error 1501.

33398  Cannot use the full name of a temporary table to drop it in 4.8; doing so results in a 3701 error: create table #CM2746 (col1 char(1)) / use tempdb / select name from sysobjects where name like "#CM2746%" / drop table #CM2746____000000100077667955.

33475  Unmirroring permanently should not attempt to set affinity on the debug engine.

33726  update tablea set tablea.fielda = tableb.fieldb where dbname.dbo.tablea.fieldc = dbname2.dbo.tableb.field3 loops on writing to syslogs. When the tables to be joined are in the same database, all is OK.

33769  When a deadlock occurs while inserting a row on an overflow page, an attempt is made to unkeep a buffer twice.

34124  An attempt to drop a database raises error 834. There is no error message associated with 834.

35139  Using bcp in on a table with an index that has ignore_dup_key set may cause Errors 1129, 813, 2525, 2521, and data corruption.

35287  The Navigation Server needs to read the distribution pages for tables for help in optimization. Change TDS 5.0 to allow this.

36404  The procedure sybase_manager.com should include a foreign symbol definition for `upgrade', which will allow easy execution of the upgrade program for phase 4 of the upgrade process. See Appendix A of the Installation Guide for examples of using the program for recovery purposes.

37829  The following SQL statement produces a parse tree that is incorrect. In addition, the syntax is illegal in ANSI: "declare @v1 int, @v2 int select @v1 order by @v2".

37934  There is a bug in installasync70. First, the asynchronous driver isn't shipped with 700s (HP-UX 8.07); the customer must get PHKL_1017. Next, the script assumes the driver ends up in /hpux_install, but it usually doesn't after the patch is applied.

38574  The sp_auditobject (and sp_auditsproc) procedures expect a table or view (or stored procedure/trigger) name as the first argument. We allow the owner.name format, but they provided dbname.owner.name, and the command failed later than it should have.

39246  Customer had a problem with isql on VMS 5.5: /colwidth did not work. This is a reported bug in the documentation (27341). It was supposed to be fixed in 4.8.0.1, but it was not fixed in the 4.9.1 docs. Also, /separator should be /colsepa.

39736  startserver problems: an extra # on the first line of the script created for -m causes the shell specification to be ignored, causing the script to fail on platforms using csh. m_ is prefixed to the runserver file name without regard to whether it is a full path or not. The runserver script won't work unless the full path is given or the script is found in a PATH directory. A previous workaround conflicted with -m.

39801  When command text auditing is disabled, a memory structure isn't being cleared. Thus, the parser thinks we should be saving the text, and we end up segfaulting.

41854  Install the sybsyntax db by running the ins_syn_sql script; issuing sp_syntax then gets the error `server user id is not a valid user in db sybsyntax'. The same problem happens with ins_syn_dblib also.

42132  Executing sp_clearstats and sp_reportstats as "sa" before any user logins are created gives the error "divided by zero".

42269  shutdown SYB_BACKUP with nowait, while a dump is in progress, causes Backup Server to dump core.

42514  Dump of a database to 8-mm tape without the init option will overwrite what is on the tape. The dump should fail; the init option should be used to overwrite. If you dd a file to the tape, the dump will succeed and overwrite the file.

42901  Error 813 "logical page already hashed" occurs after a bcp, running concurrently with another process, aborts.

43282  Both 4.9.1 on Solaris 2.1 and the System 10 beta on SunOS 4.1.2 show error 102 returned when attempting to pass `+3' as an int parameter to a stored procedure, although it succeeds in dynamic variables and direct inserts.

43661  Assignments of the "sybase_ts" role do not take effect in the same login session. This means that many diagnostic tools cannot be used unless the server is healthy enough to let someone log in twice, once to assign the role and a second time to use it.

44275  select "column_heading" = column_name, select column_heading = column_name, and select column_name as column_heading all work, but select column_name as "column_heading" doesn't.

44667  shutdown with nowait of a Backup Server while a dump is in progress results in a Backup Server core dump with "assertion failed".

44680  Backup Server cannot perform dump/load database when the database device is on a character special device. Backup Server returns Error 4.56.2.1.

45492  Customer is getting an inexplicable 224 error when inserting into a table that has both a reference constraint and check constraints.

45723  create table, alter table then add constraint, drop constraint, and add constraint raises error 1923: "..table cannot have more than one primary key...", but there is no primary key.
45837  Identity problem: create table t1 (f1 numeric(2,0) identity), then create view vident as select f1 from t1, then insert into vident values() results in a stack trace with the message "current process(0x80008) infected with Signal 11 ... The SQL Server is terminating this process."

Q5.4: TECHNICAL NEWS

Volume 4, Number 2   May, 1995
_________________________________________________________________

Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose. In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey
Contributing Writers: Lance Anderson, Paul Dow, Ken Duffy, Aimee Grimes, Joseph Hui, Greg Klinder, Andrzej Sarapuk, Gerald Soulos, Loretta Vibberts, Rob Weaver, Elton Wildermuth

Send comments and suggestions to: SYBASE Technical News, 6475 Christie Avenue, Emeryville, CA 94608, or technews@sybase.com

This issue of the SYBASE Technical News contains new information about your SYBASE software. If needed, duplicate this newsletter and distribute it to others in your organization. All issues of the SYBASE Technical News and the Troubleshooting Guides are included on the AnswerBase CD.

_IN THIS ISSUE:_

Technical Support News/Features
    How Technical Support Uses Case Survey Metrics
SQL Server
    HP: Mirror Fail-Over and installasynch80
    buildmaster -y and -r Options to Become Obsolete
    Dump Transaction Fails to Truncate Log
    How to Read dbcc checkalloc Output
    Replication Server System Database Recovery
    min() and max() Functionality
    Viewing all Decimal Digits in a select from money_column
    Dump/Load Compatibility, 10.0.1 vs. 10.0.2
Connectivity / Tools / PC
    keepalive Redux
    Setting up Borland IDE to Compile Client-Library
    Macintosh Think-C and ct_init Routine
Certification and Fixed Bug Reports
    SQL Debug Certified for DEC OSF/1
    IPX/SPX Certification for Solaris
    Certification Report for All Platforms
    Bug 61483 - Duplicate Rows in syskeys

How Technical Support Uses Case Survey Metrics

Sybase Technical Support has been sending out a Technical Support Case Closure Satisfaction Survey for 16 months (September 1993 through January 1995). In October 1994, based on feedback from customers who had responded to the survey, we modified the survey process so that:

· Customers receive surveys no more frequently than twice a year.
· Customers are asked to respond about one completed case instead of multiple cases.
· Customers who request it continue to receive monthly case status reports.

The feedback received is being used in the following ways to drive corrective action and process improvements:

· Returned surveys that include concerns about the handling and/or status of cases are regularly forwarded to Technical Support managers to ensure follow-up and, if required, corrective action.
· A team of 10 to 12 Technical Support engineers met monthly for several months to carefully review customer narrative comments on the quality of our support and to document the concerns and suggestions for senior management review and action.
· Documentation developers met to review customer comments on the quality of our documentation to determine what improvements could be made.
· Senior managers are using the information to develop plans for support process improvements and are taking ownership for follow-up on these plans.

We want to thank all of you who have taken the time to respond to our survey, and ask that you continue. We have heard very clearly that electronic delivery of surveys would make it easier for you to continue to respond, and we are pursuing that capability.

HP: Mirror Fail-Over and installasynch80

Question

In testing mirror fail-over on HP, I cannot get the server to switch over when the primary device is powered off. What is the problem?

Answer

This problem has been tracked to the installasynch80 script. The mknod command for async is in error. The incorrect command included in the script is as follows:

    /etc/mknod /dev/async c 101 0

The command should read as follows:

    /etc/mknod /dev/async c 101 0x7

Here is the corrected section in the installasynch80 script, which you will find in your $SYBASE/install directory:

    #
    # create_devices creates the async i/o device file
    #
    create_devices()
    {
        echo ""
        echo " We are about to create the asynchronous raw disk i/o device."
        echo " If this device already exists, it will be replaced. "
        # prompt user for continuation
        #
        checkanswer
        echo " Creating asynchronous i/o device....."
        rm -f /dev/async /dev/asynch[0-9]*
        /etc/mknod /dev/async c 101 0x7
        chmod 0660 /dev/async
        chown sybase /dev/async
        chgrp $GROUP /dev/async
        echo ""
        echo " Async device has been created. "
    }

buildmaster -y and -r Options to Become Obsolete

A new configuration interface for the next generation of SQL Servers after System 10[TM] will render both the -y and -r command line flags to buildmaster obsolete. All configurable options (cdbnum, cmemsize, cfgxdes, and so on) will be placed in a configuration file, which is in a text format. In System 10 and earlier releases, buildmaster -y and -r are used to change the values of options in the config block of SQL Server. They will no longer be needed, because the new configuration interface will allow users to modify the values in a flat text config file. Also, the names in the config file will not be cdbnum, cmemsize, and the like, but "number of databases", "total memory", and so forth, rendering the information more understandable. If SQL Server cannot boot with the config file, it will still be able to start under the default settings (formerly the -r option) if you do not specify a configuration file. In addition, the dynamically tunable parameters will be changed with the sp_configure command, not when reconfigure is run. Watch your Technical News for more information as it becomes available.

Dump Transaction Fails to Truncate Log

WARNING! Although the following procedure has been tested rigorously and used at many customer sites, it uses undocumented and unsupported dbcc commands.

If you are issuing a dump transaction command and find that the log is not being truncated, it is possible that an open transaction is preventing the log from being cleared. This occurs because dump transaction will only truncate the log up to, but not including, the log page that contains the begin transaction for the oldest active transaction.
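As a minimal illustration of the failure mode (the database and table names here are hypothetical), a connection that begins a transaction and never commits pins the log; subsequent dump transaction commands cannot truncate past that page:

    /* session 1: transaction left open */
    1> begin tran
    2> update mydb..accounts set balance = 0 where id = 1
    3> go
    /* ... the connection sits idle; no commit or rollback ... */

    /* session 2, later: */
    1> dump transaction mydb with truncate_only
    2> go
    /* completes, but the log does not shrink past the open begin tran */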
You can use the following procedure to determine the spid, suid, and time of the open transaction (you must be the "sa" user or have the sa role to use this procedure):

1. Enable dbcc command output to appear on the screen:

       dbcc traceon(3604)
       go

2. Print the oldest outstanding checkpoint record:

       dbcc log(dbid,0,0,0,-1,17,0)
       go

3. The output from this allows you to find the ID of the oldest active transaction. Output will be similar to the following:

       _LOG RECORDS:_
       CHECKPOINT       (262 ,12)
            attcnt=1 rno=3 op=17 padlen=0 xactid=(262 ,12)
            len=60                        /* length field */
            status=0x0000
            rows=11407892, pages=79 extents=288 status=4096
            timestamp=0x0001 0x0000811d
            active xacts:(262,3)          /* oldest active xact ID */

   NOTE: This technique is not valid if the length field has a value of 52 or less. In this case, execute a checkpoint and try again.

4. Display the begin transaction record associated with the active transaction ID found above. Note that the third and fourth parameter values of the following command come from the active xacts field above:

       dbcc log(dbid,1,262,3,1,0,1)
       go

   Output in the example case looks like this:

       _LOG RECORDS:_
       BEGINXACT        (262 , 3)
            attcnt=1 rno=3 op=0 padlen=3 xactid=(262,3)
            len=76 status=0x2C20
            masterid=(0,0) lastrec=(0,0)
            xstat=XBEG_ENDXACT
            spid=5 suid=1 uid=1 masterdbid=0 mastersite=0 endstat=3
            name=user_transaction
            time=Sep 20 1991 1:02PM

   The spid and suid fields identify the process owning the outstanding transaction.

5. Get the login name that matches the suid:

       use master
       go
       select name from syslogins where suid = 1   /* substitute the suid from step 4 */
       go

Now run sp_who to see if there is an entry that contains both the spid and login name obtained above. If so, follow up with the user to find out what their transaction is and whether they can exit their program or isql session, or issue a commit tran. Alternatively, you can immediately kill the process yourself; this should terminate the process and resolve the blocking transaction. Issuing the dump transaction command again should clear the log.

If the log is still not being cleared, or if there is no matching entry in sp_who, then the user that issued the blocking begin tran is no longer logged onto SQL Server, but SQL Server is unaware of this. This occurs primarily when a user connecting from a PC client has powered off the PC in the middle of a transaction, has exited a third-party application by pressing ctrl-C, or has encountered network errors on terminating the connection to SQL Server. In this case, if kill did not work, the only recourse is to reboot SQL Server.

If a user was not knowingly holding an open transaction, you can examine your SQL Server error log for clues to help you diagnose why the transaction was left open. Messages to look for include "current process infected", "network error was encountered" (Error 1608), "host process disconnected," and stack traces. Sybase Technical Support can help you analyze these errors.

If you have a UNIX environment where this problem is occurring frequently, you may want to adjust your keepalive kernel parameter to a lower value. See your SYBASE System Administration Guide Supplement, your operating system documentation, or contact your operating system vendor for more information on the keepalive parameter and how to set it. If your environment consists of a number of users utilizing a single login to access SQL Server, Sybase Technical Support can give you trace flags to help track down the user who is causing the problem.

NOTE: PowerBuilder users: When keepalive fails to clear a connection, it may be because PowerBuilder had a General Protection Fault and the user simply restarted Windows. This leaves the LAN transport software/firmware holding the connection open to SQL Server. keepalive will not clear the connection, since the client-side LAN is still holding the old connection. When the user starts PowerBuilder again, it will get a new connection to SQL Server. You can use sp_who or netstat to see if the old user connection is still active. The only way to clean up the connection is to turn off the PC that opened the connection; this resets the connection and allows keepalive to do the proper cleanup on the SQL Server side.
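To recap steps 4 and 5 with the sample values from the BEGINXACT record above (spid 5, suid 1), the follow-up from an sa session might look like this sketch:

    1> select name from master..syslogins where suid = 1
    2> go
     name
     ----
     sa

    (1 row affected)
    1> sp_who
    2> go
    /* ... confirm spid 5 appears with that login and an open transaction ... */
    1> kill 5
    2> go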
How to Read dbcc checkalloc Output

Question

When examining the output of a dbcc checkalloc command, I notice that only one page seems to be used per extent on many allocation pages. Why? Here is a sample of the output:

    [...]
    Alloc page 16128 (# of extent=1 used pages=1 ref pages=1)
    Alloc page 16384 (# of extent=1 used pages=1 ref pages=1)
    Alloc page 16640 (# of extent=1 used pages=1 ref pages=1)
    [...]

Answer

An extent is a group of eight pages. Every 256th page in a database is an allocation page, which is used to track the space being used. The above output does not show that one page is used per extent on many allocation units; it shows that one extent is in use and that, on that extent, one page is allocated. That is, exactly one page is allocated out of that group of 256 pages. That page is, of course, the allocation page itself, which is always allocated. The other 255 pages on each such allocation unit are available to whatever object needs the space.

Normally an extent is owned by whatever object/index combination first grabs space on it. The first extent on each allocation unit, which controls the allocation page itself, is an exception: the allocation page is always allocated, but the extent is considered free if that page is the only one allocated.
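For reference, output like the sample above can be produced on your own screen with the following pair of commands (the database name is hypothetical; as in the dump transaction procedure earlier in this issue, traceflag 3604 routes dbcc output to your session):

    1> dbcc traceon(3604)
    2> go
    1> dbcc checkalloc(mydb)
    2> go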
Replication Server System Database Recovery

Following the recovery procedure outlined in the Replication Server Administration Guide dated October 21, 1993, for Replication Server System Database (RSSD) recovery could possibly put the Log Transfer Manager (LTM) into an infinite loop. Please substitute the following revised sections for the sections that appear in the chapter "Replication System Recovery" of any Replication Server Administration Guide dated prior to March 15, 1995.

Recovering an RSSD from Dumps

The procedure you use to recover an RSSD depends on how much RSSD activity there has been since the last RSSD dump. There are four increasingly severe levels of RSSD failure, with corresponding recovery requirements. Use Table 1 to locate the RSSD recovery procedure you need.

Table 1: Recovering from RSSD failures

    Activity Since Last Dump               Use This Procedure
    -------------------------------------  --------------------------------------
    No DDL activity                        "Basic RSSD Recovery Procedure"
    DDL activity, but no new routes or     "Subscription Comparison Procedure"
    subscriptions were created
    DDL activity, but no new routes        "Subscription Re-Creation Procedure"
    were created
    New routes were created                "Deintegration/Reintegration Procedure"

Basic RSSD Recovery Procedure

Use the basic RSSD recovery procedure to restore the RSSD if you have executed no DDL commands since the last RSSD dump. DDL commands in RCL include those for creating, altering, or deleting routes, replication definitions, subscriptions, function strings, functions, function string classes, or error classes.

Certain steps in this procedure are also referenced by other RSSD recovery procedures in this article and in the Replication Server Administration Guide. Do not execute any DDL commands until you have completed this recovery procedure.

The current Replication Server refers to the one with the RSSD you are recovering. An upstream Replication Server is a Replication Server that has a direct or indirect route to the current Replication Server. A downstream Replication Server is a Replication Server to which the current Replication Server has a direct or indirect route.

To perform basic RSSD recovery, follow these steps:

1. Shut down all LTMs that connect to the current Replication Server.

2. Since its RSSD has failed, the current Replication Server is down. If for some reason it is not down, log into it and use the shutdown command to shut it down. Some messages may still be in the Replication Server stable queues; data in those queues may be lost when you rebuild the queues in later steps.

3. Restore the RSSD by loading the most recent RSSD database dump and all transaction dumps.

4. Restart the Replication Server in standalone mode, using the -M flag. You must start the Replication Server in standalone mode because the stable queues are now inconsistent with the RSSD state. When the Replication Server starts in standalone mode, reading of the stable queues is not automatically activated.

5. Log into the Replication Server, and get the generation number for the RSSD, using the admin get_generation command:

       admin get_generation, data_server, rssd_name

   For example, the Replication Server may return a generation number of 100.

6. In the Replication Server, rebuild the queues with the following command:

       rebuild queues

   See "Rebuilding Queues Online" in the Replication Server Administration Guide for a description of this process.

7. Start all LTMs (except the RSSD LTM) that connect to the current Replication Server. Start them in recovery mode, using the -M flag. Wait until each LTM logs a message that it is finished with the current log.

8. Check the loss messages in the Replication Server log and in the logs of all the Replication Servers with direct routes from the current Replication Server. If all your routes were active at the time of failure, you probably will not experience any real data loss. However, loss detection may indicate real loss. Real data loss may be detected if the database logs were truncated at the primary databases, so that the rebuild process did not have enough information to recover. If you have real data loss, reload database logs from old dumps. See "Recovering from Truncated Primary Database Logs" in the Replication Server Administration Guide. See "Loss Detection After Rebuilding Stable Queues" in the Replication Server Administration Guide for background and details on loss detection.

9. Shut down the LTMs for all primary databases managed by the current Replication Server.

10. Execute the dbcc settrunc command at the SQL Server for the restored RSSD to move up the LTM truncation point:

        > use rssd_name
        > go
        > dbcc settrunc('ltm', 'ignore')
        > go
        > begin tran commit tran
        > go 40
        > dbcc settrunc('ltm', 'valid')
        > go

    Note: The go 40 command moves the SQL Server log onto the next page.
11. Execute the dbcc settrunc command at the SQL Server for the restored RSSD to set the generation number to one higher than the number returned by admin get_generation in step 5:

        > use rssd_name
        > go
        > dbcc settrunc('ltm', 'gen_id', 101)
        > go

    Make a record of this generation number and of the current time, so that you can return to this RSSD recovery procedure if necessary. Alternatively, you can dump the database after setting the generation number.

12. Restart the Replication Server in normal mode. If you performed this procedure as part of the subscription comparison or subscription re-creation procedure, note that the upstream RSI outbound queue may contain transactions bound for the RSSD of the current Replication Server that have already been applied by using rs_subcmp. If this is the case, after starting the Replication Server, the error log may contain warnings referring to duplicate inserts. You can safely ignore these warnings.

13. Restart the LTMs for the RSSD and for user databases in normal mode. If you performed this procedure as part of the subscription comparison or subscription re-creation RSSD recovery procedure, you should expect to see messages regarding RSSD losses being detected in all Replication Servers that have routes from the current Replication Server.

Subscription Comparison Procedure

Follow this RSSD recovery procedure if you have executed some DDL commands since the last transaction dump, but you have not created any new subscriptions or routes. DDL commands in RCL include those for creating, altering, or deleting routes, replication definitions, subscriptions, function strings, functions, function string classes, or error classes.

WARNING! Do not execute any DDL commands until you have completed this recovery procedure.

Following this procedure makes the failed RSSD consistent with upstream RSSDs, or consistent with the most recent database and transaction dumps (if there is no upstream Replication Server). It then makes downstream RSSDs consistent with the recovered RSSD.

The current Replication Server refers to the one with the RSSD you are recovering. An upstream Replication Server is a Replication Server that has a direct or indirect route to the current Replication Server. A downstream Replication Server is a Replication Server to which the current Replication Server has a direct or indirect route.

If DDL commands have been executed at the current Replication Server since the last transaction dump, you may have to re-execute them. To restore an RSSD with subscription comparison, follow these steps:

1. To prepare the failed RSSD for recovery, perform steps 1-4 of the "Basic RSSD Recovery Procedure".

2. To prepare the RSSDs of all upstream Replication Servers for recovery, execute the admin quiesce_force_rsi command at each upstream Replication Server.
   + This step ensures that all committed transactions bound for the current Replication Server have been applied before you execute the rs_subcmp program.
   + Execute this command sequentially, starting with the Replication Server that is furthest upstream from the current Replication Server.
   + Make sure that RSSD changes have been applied, that is, that the RSSD DSI outbound queues are empty.
   + The Replication Server that is directly upstream from the current Replication Server cannot be quiesced.

3. To prepare the RSSDs of all downstream Replication Servers for recovery, execute the admin quiesce_force_rsi command at each downstream Replication Server.
   + This step ensures that all committed transactions bound for the current Replication Server have been applied before you execute the rs_subcmp program.
   + Execute this command sequentially, starting with the Replication Servers that are immediately downstream from the current Replication Server.
   + Make sure that RSSD changes have been applied, that is, that the RSSD DSI outbound queues are empty.

4. Reconcile the failed RSSD with all upstream RSSDs, using the rs_subcmp program.
   + First execute rs_subcmp without reconciliation to get an idea of what operations it will perform. When you are ready to reconcile, use the -r flag to reconcile the replicate data with the primary data.
   + You must execute rs_subcmp as the maintenance user. See "Managing Replication Server Security" in the Replication Server Administration Guide for more information on the maintenance user.
   + In each instance, specify the failed RSSD as the replicate database.
   + In each instance, specify the RSSD of each upstream Replication Server as the primary database.
   + Start with the Replication Server that is furthest upstream, and proceed downstream for all other Replication Servers with routes (direct or indirect) to the current Replication Server.
   + Reconcile each of the following RSSD system tables: rs_classes, rs_columns, rs_databases, rs_erroractions, rs_functions, rs_funcstrings, rs_objects, and rs_systext.
   + When you execute rs_subcmp on replicated RSSD tables, the where and order by clauses of the select statement must select all rows to be replicated. See "Using rs_subcmp on Replicated RSSD System Tables" below for more information.

   The failed RSSD should now be recovered.

5. Reconcile all downstream RSSDs with the RSSD for the current Replication Server, which was recovered in the previous step, using the rs_subcmp program.
   + First execute rs_subcmp without reconciliation to get an idea of what operations it will perform. When you are ready to reconcile, use the -r flag to reconcile the replicate data with the primary data.
   + You must execute rs_subcmp as the maintenance user. See "Managing Replication Server Security" in the Replication Server Administration Guide for more information on the maintenance user.
   + In each instance, specify the recovered RSSD as the primary database.
   + In each instance, specify the RSSD of each downstream Replication Server as the replicate database.
   + Start with the Replication Servers that are immediately downstream. Then proceed downstream for all other Replication Servers with routes (direct or indirect) from the current Replication Server.
   + Reconcile each of the following RSSD system tables: rs_classes, rs_columns, rs_databases, rs_erroractions, rs_functions, rs_funcstrings, rs_objects, and rs_systext.
   + When you execute rs_subcmp on replicated RSSD tables, the where and order by clauses of the select statement must select all rows to be replicated. See "Using rs_subcmp on Replicated RSSD System Tables" below for more information.

   All downstream RSSDs should now be fully recovered.

6. If the recovering Replication Server is an ID Server, you must restore the Replication Server and database IDs in its RSSD.
   + For every Replication Server, check the rs_databases and rs_sites system tables for their IDs.
   + Insert the appropriate rows into the recovering RSSD's rs_idnames system table if they are missing.
   + Delete from the recovering RSSD's rs_idnames system table any IDs of databases or Replication Servers that are no longer part of the replication system.
   + To ensure that the rs_ids system table is consistent, execute the following stored procedure in the RSSD of the current Replication Server: rs_mk_rsids_consistent
7. If the recovering Replication Server is not an ID Server, and a database connection was created at the recovering Replication Server after the last transaction dump, delete the row corresponding to that database connection from the rs_idnames system table in the ID Server's RSSD.
8. Perform steps 5-13 of "Basic RSSD Recovery Procedure".
9. To complete RSSD recovery, re-execute any DDL commands that have been executed at the current Replication Server since the last transaction dump.

Using rs_subcmp on Replicated RSSD System Tables

When you execute rs_subcmp on replicated RSSD tables for the subscription comparison and subscription re-creation RSSD recovery procedures, the where and order by clauses of the select statement must, for each system table, be formulated to select all rows that must be replicated. Table 2 illustrates the general form of these select statements.

Table 2: select statements for rs_subcmp procedure

RSSD Table Name  select Statement
---------------  ----------------
rs_classes       select * from rs_classes where prsid in sub_select order by primary_keys
rs_columns       select * from rs_columns where prsid in sub_select and rowtype = 1 order by primary_keys
rs_databases     select * from rs_databases where prsid in sub_select and rowtype = 1 order by primary_keys
rs_erroractions  select * from rs_erroractions where prsid in sub_select order by primary_keys
rs_funcstrings   select * from rs_funcstrings where prsid in sub_select and rowtype = 1 order by primary_keys
rs_functions     select * from rs_functions where prsid in sub_select and rowtype = 1 order by primary_keys
rs_objects       select * from rs_objects where prsid in sub_select and rowtype = 1 order by primary_keys
rs_systext       select * from rs_systext where prsid in sub_select and texttype in ('O', 'S') order by primary_keys

In the select statements in Table 2, sub_select represents the following statement, which selects all site IDs that are the source Replication Servers for the current Replication Server:

(select source_rsid from rs_routes
 where (through_rsid = PRS_site_ID or through_rsid = RRS_site_ID)
 and dest_rsid = RRS_site_ID)

where PRS_site_ID is the site ID of the Replication Server managing the primary RSSD, and RRS_site_ID is the site ID of the Replication Server managing the replicate RSSD for the rs_subcmp operation. For the rs_columns, rs_databases, rs_funcstrings, rs_functions, and rs_objects system tables, a row with rowtype = 1 is a replicated row; only replicated rows need be compared using rs_subcmp. For each system table, the primary_keys are the columns of its unique index.

Classes and System Tables

The default function string class rs_sqlserver_function_class and the default error class rs_sqlserver_error_class do not initially have a designated primary site; that is, their site ID = 0. If the recovering Replication Server was made primary for a function string class or error class since the last transaction dump, the rs_subcmp procedure described earlier in this section would find orphaned rows in downstream RSSDs. In that event, run rs_subcmp again on the rs_classes, rs_erroractions, rs_funcstrings, and rs_systext system tables, with prsid = 0 in the where clause, to repopulate these tables with the necessary default settings.
For example, use the following select statement for the rs_classes table:

select * from rs_classes where prsid = 0 order by primary_keys

Example

Suppose you have the following Replication Server sites in your replication system, where an arrow (-->) indicates a route. Site B is the failed site, and there are no indirect routes.

A-->B
C-->B
C-->D
B-->E

The preceding Replication Server sites have the following site IDs:

A = 1, B = 2, C = 3, D = 4, E = 5

In this example, to bring the RSSDs to a consistent state, you would perform the following tasks, in the order presented, on the rs_classes, rs_erroractions, rs_funcstrings, and rs_systext system tables.

Reconciling with Upstream RSSDs

1. Run rs_subcmp against the above tables, specifying site B as the replicate and site A as the primary, with prsid = 1 in the where clauses. For example, the select statement for rs_columns should look like the following:

select * from rs_columns
where prsid in (select source_rsid from rs_routes
                where (through_rsid = 1 or through_rsid = 2)
                and dest_rsid = 2)
and rowtype = 1
order by objid, colname

2. Run rs_subcmp against the above tables, specifying site B as the replicate and site C as the primary, with prsid = 3 in the where clauses. For example, the select statement for rs_columns should look like the following:

select * from rs_columns
where prsid in (select source_rsid from rs_routes
                where (through_rsid = 3 or through_rsid = 2)
                and dest_rsid = 2)
and rowtype = 1
order by objid, colname

Reconciling Downstream RSSDs

1. Run rs_subcmp against the above tables, specifying site B as the primary and site D as the replicate, with prsid = 2 in the where clauses. For example, the select statement for rs_columns should look like the following:

select * from rs_columns
where prsid in (select source_rsid from rs_routes
                where (through_rsid = 2 or through_rsid = 4)
                and dest_rsid = 4)
and rowtype = 1
order by objid, colname

2. Run rs_subcmp against the above tables, specifying site B as the primary and site E as the replicate, with prsid = 2 in the where clauses. For example, the select statement for rs_columns should look like the following:

select * from rs_columns
where prsid in (select source_rsid from rs_routes
                where (through_rsid = 2 or through_rsid = 5)
                and dest_rsid = 5)
and rowtype = 1
order by objid, colname

See the Replication Server Commands Reference for more information about the rs_subcmp program and the RSSD system tables.

Subscription Re-Creation Procedure

Follow this RSSD recovery procedure if you have created new subscriptions since the last transaction dump, and if you have executed some other DDL commands, but have not created any new routes. DDL commands in RCL include those for creating, altering, or deleting routes, replication definitions, subscriptions, function strings, functions, function string classes, or error classes.

WARNING! Do not execute any DDL commands until you have completed this recovery procedure.

As with the subscription comparison RSSD recovery procedure, following this procedure makes the failed RSSD consistent with upstream RSSDs, or consistent with the most recent database and transaction dumps (if there is no upstream Replication Server). It then makes downstream RSSDs consistent with the recovered RSSD. In this procedure, however, you also either delete or re-create subscriptions that are in a limbo state due to the loss of the RSSD. The current Replication Server refers to the one with the RSSD you are recovering.
An upstream Replication Server is one that has a direct or indirect route to the current Replication Server. A downstream Replication Server is one to which the current Replication Server has a direct or indirect route. If DDL commands have been executed at the current Replication Server since the last transaction dump, you may have to re-execute them.

To restore an RSSD that requires that lost subscriptions be re-created, follow these steps:
1. To prepare the failed RSSD for recovery, perform steps 1-4 of "Basic RSSD Recovery Procedure".
2. To prepare the RSSDs of all upstream and downstream Replication Servers for recovery, perform steps 2-3 of "Subscription Comparison Procedure".
3. Shut down all upstream and downstream Replication Servers affected by the previous step, using the shutdown command.
4. Restart all upstream and downstream Replication Servers in standalone mode, using the -M flag. All LTMs connecting to these Replication Servers shut down automatically when you restart the Replication Servers in standalone mode.
5. To reconcile the failed RSSD with all upstream RSSDs, perform step 4 of "Subscription Comparison Procedure". The failed RSSD should now be recovered.
6. To reconcile all downstream RSSDs with the RSSD for the current Replication Server, perform step 5 of "Subscription Comparison Procedure".
7. If the recovering Replication Server is an ID Server, restore the IDs in its RSSD by performing step 6 of "Subscription Comparison Procedure".
8. If the recovering Replication Server is not an ID Server, and a database connection was created at the recovering Replication Server after the last transaction dump, perform step 7 of "Subscription Comparison Procedure".
9. Query the rs_subscriptions system table for the current Replication Server.
   * Also query all Replication Servers with subscriptions to primary data managed by the current Replication Server, or with primary data to which the current Replication Server has subscriptions.
   * You can query the rs_subscriptions system table by using the rs_helpsub stored procedure.
10. For each user subscription in the rs_subscriptions system table, execute the check subscription command.
   * Execute this command at the current Replication Server and at all Replication Servers with subscriptions to primary data managed by the current Replication Server, or with primary data to which the current Replication Server has subscriptions.
   * Subscriptions with a status other than VALID must be deleted or re-created, as described below.
11. For each Replication Server that has a non-VALID subscription with the current Replication Server as the primary:
   * Note its subid, and delete the appropriate row from the primary rs_subscriptions system table.
   * Use the subid from rs_subscriptions to find the corresponding rows in the rs_rules system table, and delete those rows as well.
   For each system table, rs_subscriptions and rs_rules:
   * If a subscription is in the primary table and not in the replicate table (because it was dropped), delete the subscription row from the primary table.
   * If a subscription is in the replicate table and not in the primary table, delete the subscription row from the replicate table. After completing the rest of this procedure, re-create the subscription, as described in steps 17-19.
   * If a subscription is in both the primary and replicate tables, but is not VALID at one of the sites, delete the rows from both tables.
   After completing the rest of this procedure, re-create the subscription, as described in steps 17-19.
12. For each primary Replication Server for which the current Replication Server has a non-VALID user subscription:
   * Note its subid, and delete the appropriate row from the primary rs_subscriptions system table.
   * Use the subid from rs_subscriptions to find the corresponding rows in the rs_rules system table, and delete those rows as well.
   For each system table, rs_subscriptions and rs_rules:
   * If a subscription is in the primary table and not in the replicate table, delete the subscription row from the primary table. After completing the rest of this procedure, re-create the subscription, as described in steps 17-19.
   * If a subscription is in the replicate table and not in the primary table (because it was dropped), delete the subscription row from the replicate table.
   * If a subscription is in both the primary and replicate tables, but is not VALID at one of the sites, delete the rows from both tables. After completing the rest of this procedure, re-create the subscription, as described in steps 17-19.
13. Execute the sysadmin drop_queue command at both the primary and replicate Replication Server, for all existing materialization queues for the subscriptions deleted in steps 11 and 12.
14. Restart in normal mode all Replication Servers, and their LTMs, that had subscriptions to primary data managed by the current Replication Server, or with primary data to which the current Replication Server had subscriptions.
15. Perform steps 5-13 of "Basic RSSD Recovery Procedure".
16. Re-execute any DDL commands executed at the current Replication Server since the last transaction dump.
17. Enable autocorrection for each replication definition.
18. Re-create the missing subscriptions using the bulk materialization method. Use the define subscription, activate subscription, validate subscription, and check subscription commands for bulk materialization.
19. For each re-created subscription, restore consistency between the primary and replicate data in either of two ways:
   * Drop the subscription using the drop subscription command with the with purge option, then re-create the subscription.
   * Use the rs_subcmp program with the -r flag to reconcile replicate and primary subscription data.

See the Replication Server Commands Reference for more information about the rs_subcmp program and the RSSD system tables.

Deintegration and Reintegration Procedure

If you created routes since the last time the RSSD was dumped, you must perform the following tasks before you can finish recovering the RSSD:
1. Remove the current Replication Server from the replication system. See "Modifying a Replication System" in the Replication Server Administration Guide for details.
2. Reinstall the Replication Server. See the Replication Server Installation Guide for complete information about reinstalling Replication Server.
3. Re-create Replication Server routes and subscriptions. See the Replication Server Administration Guide, Chapter 6, "Subscribing to Replicated Tables," and Chapter 7, "Managing Routes," for details.

See the Replication Server Commands Reference for more information about the rs_subcmp program and the system tables.

min and max Functionality

Question

How can I find the min or max of two int or float columns for all rows in a table? Is it possible to do min or max on two columns at once?

Answer

It is entirely possible to do min or max on two columns at once.
Given a table that was created as follows:

create table MINMAX (a int, b int)

you can use the following sequence of Transact-SQL commands to get the results you want:

1> select a, b,
2> "max" = (a+b)/2.0 + abs((a-b)/2.0),
3> "min" = (a+b)/2.0 - abs((a-b)/2.0)
4> from MINMAX
5> go

 a           b           max                  min
 ----------- ----------- -------------------- -------------------
          10           5            10.000000            5.000000
           5           9             9.000000            5.000000
           3         -11             3.000000          -11.000000
          -7         -99            -7.000000          -99.000000

(4 rows affected)

You may also wrap the expressions in convert(int, ...) if you need integer results.

Viewing All Decimal Digits in a select from a money Column

Question

How do I convert a money field so that an entire entry, with all four decimal digits, is visible?

Answer

The following example shows one way to display all the digits in a single string (preserving leading zeros in decimals), given a table with one money column (m1) loaded as follows:

1> insert into lvmoney2 values ($12345678901.0087)
2> insert into lvmoney2 values ($11170305027104.1195)
3> go

Structure your query like this:

select str(floor(m1), 15, 0) +
    convert(char(4),
        replicate("0", 4 - datalength(convert(varchar(4),
            convert(int, 10000 * (m1 - floor(m1)))))) +
        convert(varchar(4),
            convert(int, 10000 * (m1 - floor(m1)))))
from lvmoney2

The output will look like this:

 123456789010087
 111703050271041195

(2 rows affected)

For further information, see "Float and Money Display Methods" in Volume 3, Number 3 of the SYBASE Technical News (available in AnswerBase via a full-text query).

Dump/Load Compatibility, 10.0.1 vs. 10.0.2

Question

Can a backup done by a release 10.0.1 Backup Server be loaded onto a release 10.0.2 SQL Server?

Answer

There are really two issues here, but the short answer is yes, 10.0.1 dumps are compatible with 10.0.2. The two issues are:

· Can you load a 10.x dump onto a 10.0.2 SQL Server?
· Can you use different versions of Backup Server and SQL Server?

Loading Dumps from a 10.0 or 10.0.1 P1 SQL Server to 10.0.1 P2 or 10.0.2

If you want to load a dump from a 10.0 SQL Server, or from an earlier release 10.0.1 SQL Server, onto a 10.0.1 P2 or 10.0.2 SQL Server, you must execute a script to update all system tables and user databases contained in the load. This script is necessary because of changes to the sysreferences table in release 10.0.1 P2 and later.

NOTE: Do not use the script if you are loading a dump from 10.0.1 P2 to 10.0.2. 10.0.1 P2 is completely compatible with 10.0.2, and no additional steps are needed.

1. Use an editor such as vi to edit the upgradesysrefperdb.sql script, which is located in the $SYBASE/upgrade directory. Insert the following lines at the beginning of the script:

   use database_name
   go

   where database_name is the name of the database to be updated.

2. After you have completed the load, execute the upgradesysrefperdb.sql script as System Administrator:

   % isql -Usa -Ppassword -Sserver_name < $SYBASE/upgrade/upgradesysrefperdb.sql

   where sa is the name of a System Administrator account, password is the password for that account, and server_name is the name of the 10.0.1 P2 or later SQL Server where you loaded the database.

3. Repeat steps 1 and 2 for every 10.0 or earlier 10.0.1 database loaded on a 10.0.1 P2 or later SQL Server, including the system databases.

4. Back up all of your databases.
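Because the correct procedure depends on the exact release of the server you are loading onto, it is worth confirming that release first. One quick check from isql, using the standard @@version global variable:

1> select @@version
2> go

The returned string includes the release number and EBF level; the individual fields are explained in the "How to Read SQL Server Version Strings" article elsewhere in this FAQ.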
Backup Server vs. SQL Server Versions

If you use a 10.0 Backup Server with a 10.0.1 SQL Server, you will get this error message:

Open Server Session Fatal Error: 16227.15.0: Unknown token TDS stream received by spid <spid #>

If you are using the Backup Server that is compatible with your 10.0.2 SQL Server, you should be able to load dumps from 10.0 and 10.0.1. Be sure to use the compatible Backup Server with your SQL Server: 10.0 with 10.0, 10.0.1 with 10.0.1, and 10.0.2 with 10.0.2. An exception is the beta SQL Server and Backup Server release: changes between the beta and production releases render dumps made by a beta Backup Server unreadable by a production Backup Server. See "Error 16227.15.0" in Chapter 3, "Backup Server," of the SYBASE SQL Server Troubleshooting Guide dated February 14, 1995 for more details.

keepalive Redux

SQL Server users who have started networking with PC clients often ask about "ghost" connections and keepalive timing, which can cause a server to run out of connections rapidly. The problem is that the PC user is used to running the PC in standalone mode. If the user submits a query to SQL Server that takes too long to finish, or has made a mistake, the user is likely to reboot the PC or close the application with Ctrl-Alt-Del. The aborted process hangs around in SQL Server and finishes after about four hours. While this may seem like a db_cancel problem - that is, that the query does not get cancelled when the PC goes down - it is actually due to incomplete coding in the application, combined with the value of the keepalive timeout on the SQL Server host machine. Ideally, an application allows the PC user to cancel the query and does a db_cancel when the user chooses this option. Due to aspects both of application design and of Windows itself, this is not always possible.

Network connections are always peer to peer, meaning that both sides know the other side could go away at any time. However, because of network traffic and processing considerations, it may take some time for a machine to respond. This is where keepalive comes in. The keepalive timer controls how long the machine will wait for a connected machine to respond; the keepalive interval controls how long the host will wait between pings to the remote machine; and the keepalive count controls how many times the host will ping the remote machine before it gives up and clears the connection. The problem in the PC-to-UNIX world is that the PC expects everything to behave as though it is local, while UNIX expects WAN-type connections. What this means is that a PC generally has keepalive hard coded at about one minute, while a UNIX machine has a keepalive timer whose default setting makes more sense in a world of WAN and low-speed connections - somewhere around two hours. To close the connections more quickly, you must decrease the UNIX keepalive timer to match that of the PC more closely. Depending on network traffic constraints, one to five minutes should be adequate. See your System Administration Guide Supplement or operating system documentation to find out how to set keepalive.

Setting Up the Borland IDE to Compile Client-Library

To compile Open Client Client-Library[TM] applications under Borland C++ for Windows, you can either modify the Sybase-supplied sample makefile, borland.mak, for your program, or start a new Interactive Development Environment (IDE) project. This article explains how to do the latter.
1. Open a new project for the program: for example, choose Project-->New Project from the menu bar. Browse to where the program is located (for example, %SYBASE%\sample\ctlib), and enter the target name (for example, rpc). For our sample programs, change the Target Type to EasyWin[.exe].
2. Make sure that the Class Library and BWCC checkboxes are not selected, and choose OK.
3. Choose Options-->Project. Select Directories, and make sure the following directories are listed:
   Include: %BORLAND%\include; %SYBASE%\include; %SYBASE%\sample\ctlib
   Library: %BORLAND%\lib; %SYBASE%\lib
   where %BORLAND% is the Borland compiler home directory and %SYBASE% is the Sybase home directory (this can be set up as the default for the compiler).
4. Highlight the topic "Compiler/Defines". In the "Defines" window, you must have the following: WIN3;CS_FORCE_PROTOTYPES. These can be included in your header files as well. CS_FORCE_PROTOTYPES causes function prototypes to be generated, eliminating warning messages about functions being used without prototypes.
5. Select Linker-->General and make sure that the Case sensitive link checkbox is selected.
6. Choose OK to accept all of the changes to the project. It is a good idea to select Options-->Save and save the new project at this point.
7. Choose View-->Project and add the following libraries: wcomnlib.lib, wintllib.lib, wcslib.lib, wctlib.lib.
   NOTE: You will also need to add wblklib.lib if you are using bulk copy routines. For the example programs, you will also need to add exutils.c to the project.

Macintosh Think-C and the ct_init Routine

If you are running the Open Client/C Developer's Kit for Macintosh and the ct_init routine is failing, even though your LANG variable is set correctly and ctlib.loc is available, it may be because the default project size for Think-C is too small. The default value is 396; try setting it to at least 4000 and compiling your application again.

Certification Reports

The following certification reports have been passed along to Sybase Technical News by the appropriate Sybase engineering groups.

SQL Debug Certified for DEC OSF/1

SQL Debug® has been certified to work with SQL Server release 10.0.2 for DEC OSF/1. This corrects an existing problem between SQL Debug and DEC OSF/1 SQL Server release 10.0.1 or earlier, where a control-of-flow language statement, such as if...else or while, caused SQL Debug to hang.

IPX/SPX Certification for Solaris

As of the 10.0.2 release of SQL Server on Solaris 2.x, IPX/SPX is certified and supported. You need to apply Rollup 4222 for this combination to be certified; you may order the Rollup from Technical Support in the usual way. IPX/SPX is certified at the current release level of Solaris 2.3.

Certification Report for All Platforms

PLATFORM  OPERATING SYSTEM VERSION  SERVER  DATE  NOTES
--------  ------------------------  ------  ----  -----
AT&T (NCR) QIC tape device 10.01 08/94 Passed. Tape device is supported on all OS levels currently supported for 10.0x. OS patch required: PSCSI version 94.04.25.19
AT&T (NCR) SVR4 2.00.02 (SMP) 4.9.1 04/93 Passed. Certified with WIN-TCP/IP 2.00.04. Requires ASYNCH I/O patch P2ASYNC.
AT&T (NCR) SVR4 2.00.02 (UP model) 4.9.1 04/93 Passed. Certified with WIN-TCP (i386) 02.00.04.02
AT&T (NCR) SVR4 2.01 10.0 02/94 N/A. No plans to certify.
AT&T (NCR) SVR4 2.01 4.9.1 04/93 Passed. Certified with WIN-TCP 2.00.05.
AT&T (NCR) SVR4 2.01.01 + Pentium 4.9.1 07/93 Passed.
   Certified with WIN-TCP 2.01.00.12
AT&T (NCR) SVR4 2.02 10.0 02/94 Passed. Certified with WIN-TCP 2.01.01.08
AT&T (NCR) SVR4 2.02 4.9.2 12/93 Passed. Certified with WIN-TCP 2.01.01.08 and Rollup 2025 (built on OS 2.00.01 with patched libdbt).
AT&T (NCR) SVR4 2.02 + Pentium 10.0 02/94 Passed.
AT&T (NCR) SVR4 2.03 10.01 08/94 Passed. Certified with WIN-TCP 2.02.00.07. OS patch PSCSI203 is required for any customer planning to use the HP DAT 4mm tape drive for Backup Server.
AT&T (NCR) SVR4 2.03 4.9.2 09/94 Passed. Rollup 3165 is built with a new libdbt.a so that SQL Server works with AT&T UNIX 2.02 and later releases without problems and makes use of improved system calls.
AT&T (NCR) SVR4 2.03 + EISA bus 10.01 11/94 Passed.
AT&T (NCR) SVR4 2.03 + EISA bus 4.9.2 12/94 Passed.
AT&T (NCR) SVR4 2.03 + SPX/IPX 10.0.2 01/95 Passed. Bugs entered: sybinit(65591), doc(65604), connectivity(65807, 65817). PC Windows Net-Library EBF #4316 is required for connectivity bug #65807. PC Windows DB-Library (4.2) based clients can connect to the UNIX SQL Server.
AT&T (NCR) SVR4 2.03.01 10.01 12/94 Passed. A series of patches is needed to support EISA and MCA systems.
AT&T (NCR) SVR4 2.03.01 4.9.2 12/94 Passed. A series of patches is needed to support EISA and MCA systems.
DEC Alpha AXP OpenVMS 1.5 4.9.2 Supported. This was handled as a maintenance rollup release. All customers will be shipped the EBF rollup which supports AXP VMS 1.5.
DEC Alpha AXP OpenVMS 1.5 + UCX 4.9.2 N/A. No plans to certify.
DEC Alpha AXP OpenVMS 1.5 + UCX 3.1 10.01 09/94 Passed. UCX (ECO 2 or higher) is required. SQL Server must be configured for double the bytlm. In addition, if user clients are put into a MUTEX state, we recommend doubling the bytlm for the number of users.
DEC Alpha AXP OpenVMS 6.1 10.01 12/94 Passed. Network packages certified: DECnet Phase IV, UCX 3.2 and Multinet 3.3
DEC Alpha AXP OpenVMS 6.1 4.9.2 01/95 Passed. Network packages certified: DECnet Phase IV, UCX 3.2 and Multinet 3.3
DEC Alpha OSF/1 1.3 10.00 11/93 Passed.
DEC Alpha OSF/1 2.0 10.01 09/94 Passed.
DEC Alpha OSF/1 2.0b 10.01 09/94 Passed. The "b" in the OS version indicates that the OS is for the new DEC 2100 Sable machines.
DEC Alpha OSF/1 2.1 10.01 11/94 Supported.
DEC Alpha OSF/1 3.0 10.0.2 01/95 A new product release is required to support OSF/1 3.0. It will be sent only to those customers who request it via the new PID issued as 19555. Contact Sybase Customer Service.
DEC RISC Ultrix 4.3 4.2 11/92 Passed.
DEC RISC Ultrix 4.3a 4.2 06/94 Passed. Requires EBF# 2005.
DEC RISC Ultrix 4.4 4.2 10/94 Certified. EBF 2005 or higher required.
DEC VAX OpenVMS 5.4 + UCX 3.1 10.01 08/94 Passed. To run UCX, SQL Server must be configured via the VMS AUTHORIZE utility for double the bytlm stated in the SAG Supplement. In addition, the INET device emulator must be loaded before starting SQL Server.
DEC VAX OpenVMS 5.4 + Wollongong 10.01 06/94 Passed. Known problem: the SQL Server runs out of AST quota. This problem has been reported against Wollongong.
DEC VAX OpenVMS 5.5-2 10.0 02/94 Passed.
DEC VAX OpenVMS 5.5-2 4.9.1 Supported.
DEC VAX OpenVMS 6.0 10.0 04/94 Passed. As noted in the Release Bulletin, customers must install the System 10 product on O/S version 5.4 before upgrading to OpenVMS 6.0. VMSINSTALL is not supported at this OS level.
DEC VAX OpenVMS 6.1 10.01 01/95 Passed.
   Network packages certified: DECnet Phase IV, UCX 3.2 and Multinet 3.2
DEC VAX OpenVMS 6.1 4.9.2 Passed. Network packages certified: DECnet Phase IV, UCX 3.2 and Multinet 3.2
Data General DGUX 5.4 4.2 Passed. This does not include DGUX 5.41 or 5.42
Data General DGUX 5.4.2 4.2 10/92 Failed. We recommend upgrading to 4.9 after upgrading the O/S to DGUX 5.4.2
Data General DGUX 5.4.2 4.9 Passed.
Data General DGUX 5.4.2.01 10.0 Passed.
Data General DGUX 5.4R2 4.9.0S Passed.
Data General DGUX 5.4R2.1 10.0.1 Passed.
Data General DGUX 5.4R2.1 10.0sq Passed.
Data General DGUX 5.4R3.10 10.0.1 11/94 Passed. Users with 4-mm and 8-mm tapes must have Rollup 3963 to run this combination. For 8-mm you also need DGUX patch NSQA-18785-0. See technical details in the Release Bulletin for Rollup 3963.
HP300 HP-UX 8.0 4.0.1 10/92 Passed.
HP700 HP-UX 8.01 4.2 Passed.
HP700 HP-UX 8.05 4.2 Passed.
HP700 HP-UX 8.05 4.9.1 Passed.
HP700 HP-UX 8.07 4.2 09/92 Passed.
HP700 HP-UX 8.07 4.9 Passed.
HP700 HP-UX 8.07 4.9.1 Passed.
HP700 HP-UX 9.0 4.9.1 Failed.
HP700 HP-UX 9.0.1 10.0 02/94 N/A. No plans to certify.
HP700 HP-UX 9.0.1 4.9.1 03/93 Passed. ASYNC I/O patch PHKL_2162 is required.
HP700 HP-UX 9.03 10.01 03/94 Passed.
HP700 HP-UX 9.03 4.9.2 03/94 Passed. Requires HP700 asynchronous I/O patch PHKL_3660
HP700 HP-UX 9.05 10.0.1 12/94 Passed. Requires patch PHKL_4334 and patch PHKL_4269. Bug 64656 has been entered for 'installasync70' script incompatibility.
HP700 HP-UX 9.05 4.9.2 12/94 Passed. Requires patch PHKL_4334 and patch PHKL_4269. Sybase Bugid 64656 has been entered for 'installasync70' script incompatibility.
HP800 HP-UX 8.0 4.2 Passed.
HP800 HP-UX 8.0 4.9 Passed.
HP800 HP-UX 8.02 4.9 Passed.
HP800 HP-UX 8.02 4.9.1 Passed.
HP800 HP-UX 8.06 4.9 Passed.
HP800 HP-UX 8.08 BLS (Secure) 4.9.1 N/A. No plans for certification because BLS does not support asynchronous I/O.
HP800 HP-UX 8mm tape device 10.0 10/94 Passed.
HP800 HP-UX 9.0 4.9.1 Passed.
HP800 HP-UX 9.0 using LVM 4.9.1 05/93 Passed. Bug #41610 reported: 'disk init' will only initialize Logical Volume devices <= 2GB.
HP800 HP-UX 9.04 10.0 03/94 Passed.
HP800 HP-UX 9.04 4.9.2 Passed.
HP800 HP-UX 9.04 using LVM 10.01 10/94 Passed. Requires Asynch I/O patch PHKL_3624 & LVM Sybase Mirroring patch PHKL_4418
HP800 HP-UX 9.04 using LVM 4.9.2 10/94 Passed. Requires Asynch I/O patch PHKL_3624 & LVM Sybase Mirroring patch PHKL_4418
IBM RS6000 4mm tape device 10.01 11/94 Passed. 4mm 2GB (7206-001) has full Backup Server feature support. 4mm 4GB (7206-005) supports all Backup Server features except the ability to write multiple file/volume dumps to a single tape. Bug #63636 has been entered to track this product enhancement.
IBM RS6000 AIX 3.2 4.2 Passed.
IBM RS6000 AIX 3.2.1 4.9.1 N/A. No plans for certification
IBM RS6000 AIX 3.2.2 4.9.1 N/A. No plans for certification
IBM RS6000 AIX 3.2.3e 4.9.1 05/93 Passed. IBM PTF U418109 is required for AIX 3.2.3e
IBM RS6000 AIX 3.2.4 4.9.2 08/93 Passed.
IBM RS6000 AIX 3.2.5 10.0 02/94 Passed. IBM PTFs (patches) are required for the following IBM APARs: IX45257, IX41600, IX38605, IX43714
IBM RS6000 AIX 3.2.5 4.9.2 12/93 Passed. IBM PTFs (patches) are required for the following IBM APARs: IX45257, IX41600, IX38605, IX43714
IBM RS6000 AIX 3.2.5 + PowerPC chip 4.9.2 12/93 Passed. IBM PTFs (patches) are required for the following IBM APARs: IX45257, IX41600, IX38605, IX43714
IBM RS6000 AIX 3.2.5 + SPX/IPX 10.02 01/95 Passed. SQL Server EBF 4283 is required. Bugs entered: sybinit(65590), doc(65603), connectivity(65807, 65817).
   PC Windows Net-Library EBF #4316 is required for connectivity bug #65807. PC Windows DB-Library (4.2) based clients can connect to the UNIX SQL Server.
IBM RS6000 AIX 3.2.X SP/1 SP/2 10.01 Supported.
IBM RS6000 AIX 3.2.X SP/1 SP/2 4.9.2 Supported.
IBM RS6000 AIX 4.1.1 (SMP) 10.0 01/95 No certification is planned. A new product release is required to support the SMP version of AIX 4.1.1, planned for a later date.
IBM RS6000 AIX 4.1.1 (UP) 10.01 12/94 Passed. This certification is based on the uniprocessor version of the AIX 4.1.1 release.
ICL DRS 6000 DRS/NX V7L2 10.0.0 Passed.
Motorola SVR4 R$V4.1 10.0.0 Passed.
NEC SVR4 R6.1 10.0.1 Passed.
PC NetWare v 4.02 10.0.2 Supported.
PC NetWare 3.1.1 + SFT III 4.2 05/93 Failed. Severe OS problems were encountered during the certification.
PC NetWare 386 v 3.11 4.2 Passed. This certification also includes the maintenance releases of SQL Server 4.2.1 & 4.2.2
PC NetWare 386 v 3.12 4.2.2 03/94 Passed. Requires EBF# 2388. See the release bulletin for NLMs and versions.
PC NetWare 386 v 4.0.1 4.2.2 12/93 Passed. Requires EBF #2404 and new CLIB.NLM versions. See the EBF cover letter for the CLIB.NLM version numbers and dates.
PC NetWare 4.0.2 4.2.2 12/94 Passed. Requires EBF 3617 or higher.
PC OS/2 1.2.1 4.2 Passed.
PC OS/2 1.3x.x 4.2 Passed.
PC OS/2 2.0 4.2 Passed. CSD 6055 recommended
PC OS/2 2.1 4.2 N/A. No plans to certify.
PC SCO 3.2.4 4.2 Passed.
PC SCO 3.2.4.2 4.2 11/93 Passed. Requires Rollup 1980 and SCO SLS patch UOD378a. The Rollup resolves error 611 encountered during testing. Also includes support for SCO Open Server Enterprise System Release 3.0 and SCO Open Desktop Release 3.0.
PC Windows NT 3.5 10.01 12/94 Passed. The following certification test run problems have been recorded in BTS: 64109 & 64110.
PC SCO R3.2 V4.2 10.0.1 Passed.
Pyramid N 1.1 93d067 10.0.1 Passed.
Pyramid N 94d079 10.0.1 In progress.
Pyramid Nile D067 10.0 Passed. Binaries built on Pyramid S/C062 will run on this OS version, but binaries built on Pyramid Nile/D067 are not certified to run on Pyramid ES/C062.
Pyramid S 1.1 93c062 10.0.1 Passed.
Pyramid S 1.1 93c062 4.9.1 Passed.
Pyramid S SVR4 C034 4.9 Passed.
Pyramid S SVR4 C044 4.9 Passed.
Pyramid S SVR4 C062 4.9 Passed.
Pyramid T 5.1a 93a060 4.8 Passed.
Pyramid T OSx 5.1A 4.8 Passed.
SCO R3.2 V4.2 10.0.0 Passed.
Sequent DYNIX 1.2 4.8 Passed.
Sequent DYNIX 1.4 4.8 09/92 Passed.
Sequent DYNIX 2.0 4.8 Passed. EBF binaries built on this OS version will not run on PTX 1.4 and below.
Sequent DYNIX 2.0.1 4.8 Passed. EBF binaries built on PTX 2.0 and above will not run on PTX 1.4 and below.
Sequent DYNIX 2.1 4.8 N/A. No plans to certify.
Sequent DYNIX 2.1.0 10.0.0 Passed.
Sequent DYNIX 2.1.0 4.8 Passed.
Sequent DYNIX 4.0 10.0.1 Passed.
Silicon Graphics IRIX 3.2 4.0.1 Passed.
Silicon Graphics IRIX 5.1 10.0 Passed.
Silicon Graphics IRIX 5.1.1.1 10.0.1 Passed.
Silicon Graphics IRIX 5.2 10.0.1 07/94 Passed.
Sony NEWS R6.0.1 10.0.1 Passed.
Stratus FTX 2.2.2(y) 10.01 Passed.
Stratus VOS 10.5 4.8 Passed.
Stratus VOS 11.6 4.9 Passed.
Stratus 68K VOS VOS 10.5 4.8 Passed.
Stratus FTX FTX 2.2.2 10.0 Passed.
Stratus FTX FTX 2.2.2.3 10.0.1 Passed.
Sun4 Solaris 2.2 4.9.1 06/93 Passed. Requires Rollup 1794 or higher and SunOS patches 100999-09 and 101095-01.
Sun4 Solaris 2.3 10.00 11/93 Passed.
Sun4 Solaris 2.3 4.9.2 12/93 Passed.
Sun4 Solaris 2.3 + SPARCstor 10.0.1 12/94 Passed.
   Required Sun patches: 102198-01 & 102199-02 for use with Sun's Volume Manager
Sun4 Solaris 2.3 + SPARCstor 4.9.2 12/94 Passed. Required Sun patches: 102198-01 & 102199-02 for use with Sun's Volume Manager
Sun4 Solaris 2.3 + SPX/IPX 10.0.2 01/95 Passed. Requires Rollup 4222. Bugs entered: sybinit(65592), doc(65605), connectivity(65807, 65817). PC Windows Net-Library EBF #4316 is required for connectivity bug #65807. PC Windows DB-Library (4.2) based clients can connect to the UNIX SQL Server.
Sun4 Solaris 2.3-A+Edition 1 10.0 N/A. No plans to certify.
Sun4 SunOS 4.1.1 4.2 Passed.
Sun4 SunOS 4.1.1 4.8 Passed.
Sun4 SunOS 4.1.1 + DBE 1.1 4.8 Passed. Requires SunOS 4.1.1 patch #100293 or SunDBE 1.1 patch #100322.
Sun4 SunOS 4.1.1 + DBE 1.1 4.9 02/92 Passed.
Sun4 SunOS 4.1.2 4.0.1 Passed.
Sun4 SunOS 4.1.2 4.2 Passed.
Sun4 SunOS 4.1.2 4.8 Passed. Requires SunOS 4.1.2 kernel patch #100495
Sun4 SunOS 4.1.2 + DBE 1.2 4.2 08/92 Passed.
Sun4 SunOS 4.1.2 + DBE 1.2 4.9 02/92 Passed.
Sun4 SunOS 4.1.2 + DBE 1.2 4.9.1 Passed.
Sun4 SunOS 4.1.3 4.2 N/A. No plans to certify.
Sun4 SunOS 4.1.3 4.8 02/93 Failed. We recommend upgrading to the 4.9.1 server.
Sun4 SunOS 4.1.3 4.9 03/93 Passed.
Sun4 SunOS 4.1.3 4.9.1 10/92 Passed. Requires SunOS 4.1.3 Sun-4m Supplement patch #100743-01. Patch is for multiprocessor systems only.
Sun4 SunOS 4.1.3 + DBE 1.3 10.02 02/95 Passed. EBF #4403 is required.
Sun4 SunOS 4.1.3 + DBE 1.3 4.9 N/A. No plans to certify.
Sun4 SunOS 4.1.3 + DBE 1.3 4.9.1 04/93 Passed. Requires Sun-4m Supplemental patch #100743-01. Patch is for multiprocessor and SPARC 10 systems only.
Sun4 SunOS 4.1.3/sparc10 4.9.1 Passed. Requires SunOS 4.1.3 Sun-4m Supplement patch #100743-01. Patch is for multiprocessor systems only.
Sun4 SunOS 4.1.3X 10.01 12/94 Passed. The 'X' in 4.1.3X indicates all of the 4.1.3-based releases (e.g., 4.1.3c, 4.1.3u1, 4.1.3u1b, etc.)
Sun4 SunOS 4.1.3X 4.9.2 03/94 Passed. The 'X' in 4.1.3X indicates all of the 4.1.3-based releases (e.g., 4.1.3c, 4.1.3u1, 4.1.3u1b).
Unisys 65 SVR4 Ver 1.2 10.0.1 Passed.
Unisys 65 SVR4 Ver 1.2 4.9.2 Passed.
Unisys 75 Dynix 1.3.1 4.8 Passed.
Unisys U6000 75 SVR4 1.3 4.8 Passed.
Unisys U6000/65 SVR4 1.2 4.9.2 Passed.

Bug 61483 - Duplicate Rows in syskeys

Problem

select * from syskeys shows apparent duplicate rows. This should, in theory, be impossible, since syskeys has a clustered index, which should prevent duplicate rows. These duplicates mean that some stored procedures do not report errors when they should.

Explanation

When an insert is done to a table on which there is a clustered index, SQL Server does a byte-by-byte comparison of the entire row. If it finds a duplicate row, it rejects the insert. SQL Server assumes that the complete area between the start and the end of the data is valid. This assumption turns out to be correct for all tables except syskeys. The system catalog definition for syskeys contains a column that cannot be displayed, which contains random or uninitialized data. It is therefore possible for two rows with identical data in the visible columns to differ in the contents of the hidden column. This problem occurs on 4.9.x, 10.0.1, and 10.0.2 SQL Servers.

Solution

You can correct this problem by running a script that exposes the column (it is called spare1, and is a 2-byte integer column) and binds a default of zero to it. This prevents any future occurrences; the script also corrects any existing duplicates in syskeys.
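Before ordering a Rollup, you can get a quick indication of whether a particular server is affected by looking for rows that collide on their visible key columns. The following is only an illustrative sketch, not the Sybase-supplied fix: it groups on a subset of the visible syskeys columns (id, type, depid, and keycnt), which is enough to flag candidate duplicates even though it is not the byte-by-byte comparison SQL Server itself performs.

1> select id, type, depid, keycnt, count(*)
2> from syskeys
3> group by id, type, depid, keycnt
4> having count(*) > 1
5> go

Any rows returned are candidate duplicates, worth examining with select * before deciding whether you need the Rollup.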
The change made by this script will not break any of the system stored procedures that refer to syskeys, because the default on the new column ensures that any insert that does not specify the new column gets a default value of zero. The script also performs sanity checks to see whether it has already been run. The scripts are available as Rollups for the following platforms and releases:

Table 4: Rollups correcting bug 61483

Rollup Number  Platform                  Release Number
-------------  ------------------------  --------------
3938           SunOS Release 4.x (BSD)   10.0.2
3939           Sun Solaris 2.x           10.0.2
3940           HP 9000 Series 300 HP-UX  10.0.2
3941           AT&T (NCR) System 3000    10.0.2
3942           IBM RISC System/6000 AIX  10.0.2
3943           DEC OSF/1                 10.0.2
3944           SunOS Release 4.x (BSD)   4.9.2
3945           Sun Solaris 2.x           4.9.2
3947           HP 9000 Series 300 HP-UX  4.9.2
3948           AT&T (NCR) System 3000    4.9.2
3949           IBM RISC System/6000 AIX  4.9.2

These scripts are available only as controlled releases. Many customers are not doing operations that directly involve syskeys, and Sybase Engineering prefers that the problem be sorted out for you transparently on upgrade to 10.0.3. If you cannot wait, however, Sybase has provided the script as a Rollup that you can order from Technical Support. To minimize confusion, the scripts are all the same and will automatically sort out differences between the SQL Server releases; the release 10.0.2 script will run against all 10.0.1 P2 and 10.0.2 SQL Servers. For Windows NT, Novell NetWare, and other PC platforms, there are no recorded instances of this problem.

Q5.5: TECHNICAL NEWS

Volume 4, Number 3, August 1995
_________________________________________________________________

Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose. In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental, or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey
Contributing Writers: Lance Anderson, Perry Bent, Chris Curvey, Sandra Dellafiora, Jennifer Doman, Peter Dorfman, Roy Halbreich, Dan Haywood, Marcus Jordan, Greg Klinder, Jeff Lichtman, Rosemary Morrison, Greg Roody, Marc Sugiyama, Loretta Vibberts, Rob Weaver, Robert Weeks, Elizabeth Whitehouse, Elton Wildermuth

Send comments and suggestions to: SYBASE Technical News, 6475 Christie Avenue, Emeryville, CA 94608, or technews@sybase.com

This issue of the SYBASE Technical News contains new information about your SYBASE software. If needed, duplicate this newsletter and distribute it to others in your organization. All issues of the SYBASE Technical News and the Troubleshooting Guides are included on the AnswerBase CD. To receive this document by regular email, send your name, full internet address, and customer ID to technews@sybase.com.
IN THIS ISSUE:

Tech Support News/Features
  OpenLine News
  Technical Support North American Holidays
  Sybase Character Sets CD-ROM Available
SQL Server
  Performance Tip for Inserts with Non-Unique Clustered Indexes
  How to Read syscolumns.status
  Patches Required to Run SQL Server 10.0.2 with DEC OSF/1 Version 3.0
  Additional Information for 1605 Workaround
  DBCC LOG with truncate log set Raises Spurious 813 and 605 Errors
  Two New HP Patches for SQL Server 10.0.1
  Enabling Asynchronous I/O for HP-UX 10.0
  Moving Log Off a Device
  Query to List All Objects & Sizes on Segment
  How to Read SQL Server Version Strings
  Generating Templates for Non-interactive Install with ^W
  Optimizer Frequently Asked Questions
  Currently Supported Backup Server Devices
Connectivity / Tools / PC
  MS Access Errors with Intersolv Sybase System 10 ODBC Driver
  Workaround for non-char NULLs in OmniSQL Server
  Dynamic Parameter Markers in Embedded SQL
Certification and Fixed Bug Reports
  Bug 69427 - sp_rename Syntax
  Bugs 31229 and 58701 - sp_addmessage
  SQL Server 4.9.2 EBFs 4152 and 4659 and Bugs 54367 and 68128
  Connectivity Documentation Corrections
  Latest Rollups for SQL Server 4.9.2
  Bugs Fixed in Latest Rollup, SQL Server 4.9.2
  Latest Rollups for SQL Server 10.0.2
  Bugs Fixed in Latest Rollup, SQL Server 10.x

OpenLine News

On April 5th, Sybase OpenLine on CompuServe unveiled a new section called "EBF EXPRESS." This section allows customers with current support contracts who have licensed Sybase PC Client products to view the latest EBF Cover Letters and download the latest EBFs for those products. The sections covered are:

Status   Section Number  Section Name  Description
-------  --------------  ------------  -----------
open     23              PC Client     Windows, DOS, OS/2 Client EBFs
soon     22              Novell        Novell Workgroup Server EBFs
soon     21              NT            NT Server EBFs
planned  20              UnixWare

The sections are accessible by application only. Sybase OpenLine members may download the file called ACC-23.APP in the General Technical (#1) library, fill it out, and return it to the SysOp via CompuServe Mail. Applications are usually processed the same day, but allow 24 to 48 hours for a response by CompuServe Mail. There is also a corresponding message base for EBF EXPRESS, for reporting problems or asking questions about the EBFs or Cover Letters. The Sybase OpenLine Team is pleased to be able to offer a new level of self-sufficiency to our customers through EBF EXPRESS; we have already received very positive comments about the service!

As of April 10th, over 19,400 CompuServe members have joined the Sybase OpenLine forum, and over 28,700 messages have been posted (a rate of about 72 per workday). There are over 1,200 technical tips and informational files in the Sybase OpenLine libraries, and over 106,900 downloads have been recorded. Sybase OpenLine continues to be a great tool for networking with other Sybase customers, analysts, consultants, and employees to solve customer problems.

To join CompuServe, in the United States call (800) 848-8990; outside the U.S., dial (+1) 614-529-1340. To join the Sybase OpenLine Forum, type: GO SYBASE

Questions about the Sybase OpenLine forum or EBF EXPRESS may be addressed to: Perry Bent, 73321.374@compuserve.com

Technical Support North American Holidays

Sybase Technical Support is open on all holidays and provides full service on many of them. On the holidays shown below, Technical Support provides limited service.
During limited-service holidays, the following Technical Support coverage will be provided:

· SupportPlus Preferred and Advantage: Customers may log all cases; priority 1 and 2 cases will be worked on over the holiday.
· 24x7 and 24x5 Support: Customers may log priority 1 (down server) cases, which will be worked on over the holiday.
· SupportPlus Standard, Desk Top, and Regular Support: Customers may purchase Extended-hour Technical Support for coverage over the holiday.

Table 1: Sybase Technical Support limited-service holidays - U.S. customers

Holiday             Date
-------             ----
New Year's Day      January 2
President's Day     February 20
Memorial Day        May 29
Independence Day    July 4
Labor Day           September 4
Thanksgiving        November 23
Christmas           December 25

Table 2: Sybase Technical Support limited-service holidays - Canadian customers

Holiday                Date
-------                ----
New Year's Day         January 2
Good Friday            April 14
Victoria Day           May 22
Canada Day (observed)  June 30
Labour Day             September 4
Canadian Thanksgiving  October 9
Christmas Day          December 25
Boxing Day             December 26

Sybase Character Sets CD-ROM Available

The Sybase Character Sets CD-ROM provides new character sets and sort order files that enable SQL Server to handle data in 26 different character sets. This new product allows you to create applications in your local language(s) and handle them correctly from a cultural and lexical point of view. One or more sort order files are provided for use with each character set, allowing you to correctly sort the data in your SYBASE database. In addition, the character sets are compatible with the automatic character set conversion feature available in SYBASE SQL Server(TM) release 10.0.2 and later.

The initial release of the Sybase Character Sets CD-ROM contains character sets for the following regions and languages:

· Arabic
· Chinese (Simplified and Traditional)
· Cyrillic (Russian and other Cyrillic-based languages)
· Central and Eastern European
· Greek
· Hebrew
· Japanese
· Korean
· Turkish
· Western European

The product order number for the Sybase Character Sets CD-ROM is 10490-55-0100-41.

Frequently Asked Questions

Question: Are JLE and KLE supported?

Answer: Yes. The Sun Japanese Language Environment (JLE) and Sun Japanese Feature Package (JFP) are supported by the eucjis character set. The Sun Korean Language Environment (KLE) and Sun Korean Feature Package (KFP) are supported by the eucksc character set.

Question: I thought Sybase already supported Chinese - why do I need this product too?

Answer: The Chinese character sets provided on the Sybase Character Sets CD-ROM provide more robust support for both traditional and simplified Chinese than has been possible in the past using the ascii_8 character set. For example, pattern matches using the like SQL statement on multibyte characters are now handled properly.

Question: What are the disk and RAM requirements?

Answer: The entire content of the CD-ROM is less than 6MB. A typical user will install only a few of the character sets and sort orders, requiring only five or six files. A small amount (one to five 2K pages) of additional memory may be required when moving from ascii_8 to an Asian character set. In most cases, the additional memory use is insignificant.

Question: What SQL Server versions support Sybase Character Sets?

Answer: The character sets and sort orders are supported on any System 10 server. However, code set conversions for these new character sets are only supported in SQL Server 10.0.2 or later.
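A quick way to confirm which character set and sort order a given SQL Server is already running is the standard sp_helpsort system procedure (its exact output format varies by release):

1> sp_helpsort
2> go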
Question: What standards were used to create the character sets and sort orders?

Answer: Character sets are standardized by organizations such as the International Standards Organization (ISO), national bodies such as the Arabic Standards and Metrology Organization (ASMO) and the Hellenic Organisation for Standardization (ELOT), and software vendors. Each character set description file references the organizational standard or vendor standard on which it is based.

Performance Tip for Inserts with Non-Unique Clustered Indexes

To avoid hotspots on tables, many customers choose a clustered index that will spread inserts across data pages, instead of adding all of them on the last page. When this index is non-unique, you can achieve a significant performance benefit by adding an IDENTITY column to the index and making it unique.

Example

This example was tested using SQL Server 10.0.2 on an HP9000 E-Series machine. Two tables were used: nu, with a non-unique clustered index, and u, identical except that an IDENTITY column is added to the index to make it unique:

create table nu (
    spid      smallint      not null,
    something float         not null,
    padding   char(250)     not null
)
go
create clustered index nu_ix on nu (spid)
go

create table u (
    spid      smallint      not null,
    something float         not null,
    padding   char(250)     not null,
    spreader  numeric(30,0) identity
)
go
create unique clustered index u_ix on u (spid, spreader)
go

Two stored procedures were written to insert records:

create proc insert_nu (@count int)
as
begin
    declare @tmp int
    select @tmp = 0
    while @tmp < @count
    begin
        insert into nu (spid, something, padding)
        values (@@spid, @count, " ")
        select @tmp = @tmp + 1
    end
end
go

create proc insert_u (@count int)
as
begin
    declare @tmp int
    select @tmp = 0
    while @tmp < @count
    begin
        insert into u (spid, something, padding)
        values (@@spid, @count, " ")
        select @tmp = @tmp + 1
    end
end
go

Both tables were populated with 5000 records.

Test #1: One process performing 5000 inserts:
* Non-unique index: 5 minutes, 14 seconds
* Unique index: 12 seconds

Test #2: Two concurrent processes, each performing 5000 inserts:
* Non-unique index:
  + Process 1: 9 minutes, 53 seconds
  + Process 2: 9 minutes, 42 seconds
* Unique index:
  + Process 1: 20 seconds
  + Process 2: 21 seconds

How to Read syscolumns.status

Question

How is the status column used in the syscolumns table, what does it mean, and what are its possible values?

Answer

Here is a sample of output containing syscolumns.status:

1> select name, status from syscolumns where id=object_id("tab1")
2> go
name                           status
------------------------------ ------
pub_id                              0
pub_name                            8
city                                8
state                               8
bio                                 0

According to the header file that defines syscolumns:

· Bits 0-2 indicate bit position in a byte (bits 0-7). If you have bit columns in your table, as many as 8 of them will share the same offset, because 8 bits fit in a byte. The status column lets SQL Server find the actual position of those bits in that byte.
· Bit 3 is set if NULL is allowed in this column. If status contains an 8, this column allows NULL.
· Bit 4 is set if there is more than one check constraint on the column. If status contains a 16, this column has more than one check constraint. SQL Server stores the first check constraint in syscolumns.domain; if there is a second one, SQL Server simply notes that fact by setting the status bit.

Patches Required to Run SQL Server 10.0.2 with DEC OSF/1 Version 3.0

There are two sets of official operating system patches required in order to run SQL Server 10.0.2 for DEC OSF/1 V3.0 (product ID 19555). You must request these patches from Digital. V3.0 and V3.0b require different sets of patches. The newly shipped V3.2 does not need any patches.
· DEC patches for V3.0:
  * OSFV30-068-2
  * OSFV30-094-1
· DEC patches for V3.0b:
  * OSFV305-068-1
  * OSFV305-094-1

Patch OSFV30-094-1 (for V3.0) is required for all multiprocessor machines, machines running SYBASE database applications, and all machines using the "ITIMER_VIRTUAL" timer kernel feature. Without this patch, various panics may occur, the most likely of which is "simple_lock: time limit exceeded". The problem happens because multiple Asynchronous System Trap (AST) functions are mapped to the same bits of the AST dispatch field in the process control block (PCB). The fix removes the collision of bit usages for ASTs. You must rebuild your kernel after installing these patches.

Additional Information for 1605 Workaround

Question

The SQL Server Error Messages manual describes a workaround for Error 1605 that involves an adjustment to cnsocket. After following those instructions, I got the following error message in the SQL Server error log:

00:95/04/13 19:58:51.91 kernel kiconfig: configured number of processes (60) is insufficient for kernel disk and network processes (1032 needed).

What is going on here?

Answer

SQL Server administrators configure SQL Server through the sp_configure stored procedure and the buildmaster -y mechanism. Many of the parameters in buildmaster -y are direct copies of parameters displayed by sp_configure. Some values in buildmaster -y are calculated (cnsocket), while others are preset "at the factory" (cschedspins). In certain situations, the values computed for the calculated configuration parameters are incorrect and lead to errors during startup or operation of SQL Server.

For SQL Server to accept a connection, it must have:
· An available network socket
· An available "SYBASE kernel process," also called a "task"

cnsocket defines the maximum number of network sockets SQL Server may have open at one time. This value is further constrained by the limitations of the operating system: each network socket requires a file descriptor from the operating system, and if you try to set cnsocket to a value greater than the maximum number of file descriptors allowed, SQL Server automatically sets cnsocket to that maximum. cnsocket is calculated as:

cnsocket = cusrconnections + cfgsitebuf + XTRA_SOCKETS

where
· cusrconnections - number of configured user connections
· cfgsitebuf - number of connections reserved for site handlers
· XTRA_SOCKETS - extra connections for utility purposes (for example, network listeners)

cnproc defines the maximum number of SYBASE kernel tasks that can run at a time. cnproc is calculated as:

cnproc = cnsocket + 6 + (number of network types)

where
· cnsocket - as calculated above
· 6 - magic number; extra processes that do not require network connections (for example, checkpoint)
· number of network types - number of network listeners

If, during SQL Server startup, cnproc is less than the expected value, you get the error message shown above. This means SQL Server would run out of SYBASE kernel tasks (set at 60) before it runs out of sockets (set at about 1032). The processes referred to in the SYBASE kernel message above are SYBASE SQL Server kernel processes (tasks), not operating system processes. Changing the operating system nproc won't solve the problem; nproc is not a configuration value in all operating systems, but it is common enough that some may confuse it with cnproc in the SYBASE configuration. The operating system kernel is not the SQL Server kernel.
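As a rough worked example of these formulas (the breakdown of the 1032 figure is assumed for illustration; only the totals come from the error message above): if cusrconnections is 1000 and cfgsitebuf plus XTRA_SOCKETS together contribute about 32 more, then

cnsocket = 1000 + 32 = 1032

which is exactly the figure kiconfig reports as needed. On a server with two network types, cnproc should then come out to about 1032 + 6 + 2 = 1040, so a cnproc still set at 60 fails the startup check and produces the error shown.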
cnsocket and cnproc should be configured to values appropriate for the number of concurrent connections required in your SQL Server. You are not expected to set these values by hand, and any calculated value that you do set by hand may be lost if you run the reconfigure command. Adjusting cnsocket is really a workaround; using buildmaster -ycnsocket should not actually be necessary. Sybase Technical Support has logged a bug (#70028) to change the way the reconfigure code and the startup code adjust these parameters. SYBASE Technical News will keep you informed of any new developments in this matter.

DBCC LOG with truncate log set Raises Spurious 813 and 605 Errors

As an addendum to the article in SYBASE Technical News, Volume 4, Number 2, concerning dump transaction failing to truncate the log: customers should be aware that using dbcc log in databases with the database option truncate log on checkpoint set can result in spurious 813 and 605 errors. To avoid this, turn off truncate log on checkpoint before you run dbcc log.

Two New HP Patches for SQL Server 10.0.1

The certification report in SYBASE Technical News, Volume 4, Number 2, reported under "HP-UX 9.0.4 using LVM with SQL Server 10.0.1" that two patches were required. These have been superseded, as follows:

· Patch PHKL_3624 has been superseded by PHKL_5139.
· Patch PHKL_4418 has been superseded by PHKL_5394.

This change is reflected in the current certification report.

Enabling Asynchronous I/O for HP-UX 10.0

Question

I have upgraded to HP-UX 10.0 and wish to enable asynchronous I/O in order to run SQL Server, but the installasync80 script is only for HP-UX 8 and 9. Does a script exist for this new version, or must I configure asynchronous I/O manually?

Answer

You need to order EBF 4605 from Sybase Technical Support. This EBF contains a new asynchronous I/O driver, as well as a fix for the problem of the Backup Server not recognizing non-rewinding tape devices.

Moving Log Off a Device

Question

I need to have one of my users change the device that the log is on, but I am not sure of the appropriate procedure. The sysusages output looks like this:

1> select * from sysusages where dbid=db_id("user_db")
2> go

dbid  segmap  lstart  size   vstart     pad   unreservedpgs
----  ------  ------  -----  ---------  ----  -------------
   7       3       0  51200   83887104  NULL           3800
   7       3   51200  20480   83963904  NULL          20480
   7       4   71680   4096   83984384  NULL           4088
   7       4   75776  10240  151006208  NULL          10240

From this it appears that the user built the database as follows:

create database user_db on dev1 = 100, dev1 = 40 log on dev1 = 8

(using the with override option if using System 10, since in System 10 you cannot, by default, create a database with its log on a device that has data allocations) and then gave the command:

alter database user_db log on dev2 = 20

Answer

The desired end result is not clear. If you simply want the log on a separate device, with no change in the amount of space allocated to the database, follow these steps:

1. Execute sp_dropsegment to get the log segment off dev1.
2. Execute sp_extendsegment to put the system and default segments onto the 8 MB of dev1 previously used for the log. You need to do this because, if any pages are still allocated for the log on that device, you will get misleading information from sp_helplog, which checks the location of the log's first page.
3. Make some changes within the database, such as inserting some dummy records. This puts log records on at least one page of dev2, which is now the one and only log device.
Note that dev2 is a dedicated log device, as indicated by its segmap value of 4.

4. Execute dump transaction with truncate_only.

Query to List All Objects and Sizes on a Segment

Question
Is there a way to list all the objects on a segment, and the amount of space they use on that segment, with a query against the system catalogs?

Answer
Use the following query:

select object = o.name,
       'index' = i.indid,
       segment = s.name,
       size = reserved_pgs(i.id, i.doampg) + reserved_pgs(i.id, i.ioampg)
from sysobjects o, syssegments s, sysindexes i
where o.id = i.id
and s.segment = i.segment
order by s.name, o.name, i.indid
compute sum(reserved_pgs(i.id, i.doampg) + reserved_pgs(i.id, i.ioampg))
    by s.name

Size is reported by this query in pages; see the "System Functions" section of Chapter 2, "Transact-SQL Functions," in Volume 1 of the SQL Server Reference Manual for more details about the reserved_pgs function. To restrict the query to a given segment or object, add that segment or object to the where clause. Also, this query does the compute sum by segment, not by object and segment; to change it to by object and segment, add o.name to the by clause.

NOTE: Because syslogs (object 8) has no object allocation map (OAM), the system function reserved_pgs() looks for it explicitly, and the query as written will return double the actual size of the log segment. For all other objects, the size value returned by this query is correct.

How to Read SQL Server Version Strings

Question
The dataserver, backupserver, and isql binaries return version information when executed with a -v flag. What do the different fields mean? For example, in UNIX:

/usr/sybase/bin/dataserver -v
SQL Server/4.9.2/EBF 1795 Rollup/Sun_svr4/OS 5.1/1/OPT/Fri Jul 30 20:01:16 PDT 1993

/usr/sybase10/bin/dataserver -v
SQL Server/10.0/P/Sun_svr4/OS 5.2/1/OPT/Thu Sep 23 12:28:52 PDT 1993

/usr/sybase10/bin/backupserver -v
Backup Server/10.0.2.1/P/Sun4/4.1.x/EBF 4009/OPT/Tue Nov 15 14:27:37 PST 1994

/usr/sybase10/bin/isql -v
isql/10.0/P/sun_svr4/Solaris 2.2/1/Mon Aug 30 12:11:43 PDT 1993

Or on VAX:

server :== $sybase_system:[sybase.bin]dataserver.exe
server/version
SQL Server/4.9.2/P/VMS/5.4-1A+/1/OPT/28-JUN-1993 18:37:38.53

Answer
The fields, in order, are:

1. Product
2. Release number
3. Release type: production (P), beta, or EBF version
4. Platform identifier
5. Operating system version under which the binary was compiled. This number is hard-coded; products do not look it up from your OS.
6. "Build number"; a SYBASE internal reference only
7. Mode: optimized or debug
8. Compilation date and time

Generating Templates for Non-interactive Install with ^W

Question
I am installing a large number of identical release 10.x SQL Servers at my branch offices. What is the easiest way to do this?

Answer
Once you have configured one SQL Server using an interactive sybinit session, you may dump the attributes out to a file, which you can then edit to suit each individual site. Go through a full interactive configuration session using sybinit, so that each of the nine steps on the screen is marked "Complete":

SQL SERVER CONFIGURATION
1. CONFIGURE SERVER'S INTERFACES FILE ENTRY   Complete
2. MASTER DEVICE CONFIGURATION                Complete
3. SYBSYSTEMPROCS DATABASE CONFIGURATION      Complete
4. SET ERRORLOG LOCATION                      Complete
5. CONFIGURE DEFAULT BACKUP SERVER            Complete
6. CONFIGURE LANGUAGES                        Complete
7. CONFIGURE CHARACTER SETS                   Complete
8. CONFIGURE SORT ORDER                       Complete
9. ACTIVATE AUDITING                          Complete
At this prompt:

Enter the number of your choice and press return:

type Ctrl-w, and the prompt line will change to look like this:

Enter the number of your choice and press return: Dump out current attributes?y

"y" is the default, so just hit return, and you'll be prompted for a file name; you can either enter a new one or accept the default. Once you have this resource file, consult Appendix A, "sybinit Resource Files," of the SYBASE SQL Server Installation Guide for your platform for information on how to use it for future, non-interactive installations. You will need to edit some of the attributes, as the appendix instructs. Generate a second resource file for your Backup Server(TM) if you wish to handle Backup Server installation similarly.

Optimizer Frequently Asked Questions

Question
I've heard that the optimizer works out, on the fly, the cost of producing the best plan as well as the cost of getting the data. Once the cost of working out the best plan becomes greater than the cost of performing the data access from the best plan found so far, it stops optimizing. Therefore, if it costs the optimizer 1 CPU second to determine that a given plan will require 1/2 CPU second, it stops optimizing, since 1 is bigger than 1/2, even though a plan it hadn't yet analyzed might require only 1/4 CPU second. Is this true?

Answer
The SYBASE optimizer does not work like this, though a competitor's does. There are very good reasons not to optimize this way:

· Queries in stored procedures and triggers get optimized once and executed many times. For example:
  1. The optimizer takes 1 second to find the best query plan, which takes 0.1 seconds to execute.
  2. The optimizer takes 1/2 second to find a sub-optimal query plan, which takes 1/2 second to execute.
  Scenario 1 will take 1.1 seconds if the query is executed once, while scenario 2 will take 1 second. But if the query is in a stored procedure that is executed 1000 times, scenario 1 will take 101 seconds, while scenario 2 will take 500.5 seconds.
· The competitor's method limits the time the optimizer spends analyzing different query plans depending on the load on the server and the machine. If the machine is heavily loaded, the optimization process takes longer and looks at fewer query plans before giving up. Thus, the heavier the load, the worse the query plan, which is exactly the opposite of what you want.

Question
Isn't it the case that, once the number of tables in a join exceeds four, the optimizer starts taking shortcuts and will not evaluate all the possible plans? If so, is it a good idea to write my query with the "driving" tables first when a query has a large number of joins?

Answer
Changing the order of the tables in the from clause will normally have no effect on the query plan. When you have more than four tables in the from clause, the optimizer optimizes each subset of four tables. Then it remembers the outer table from the best plan involving four tables, eliminates it from the set of tables in the from clause, and optimizes the best set of four tables out of the remaining tables. It continues this way until only four tables remain, at which point it optimizes those four tables normally. For example, suppose you have a select with the following from clause:

FROM T1, T2, T3, T4, T5, T6

The optimizer looks at all sets of four tables taken from these six tables.
The sets are:

T1, T2, T3, T4    T1, T2, T3, T5    T1, T2, T3, T6
T1, T2, T4, T5    T1, T2, T4, T6    T1, T2, T5, T6
T1, T3, T4, T5    T1, T3, T4, T6    T1, T3, T5, T6
T1, T4, T5, T6    T2, T3, T4, T5    T2, T3, T4, T6
T2, T3, T5, T6    T2, T4, T5, T6    T3, T4, T5, T6

For each one of these combinations, it looks at all the join orders (permutations). For example, the join orders for T2, T3, T5, T6 are:

T2, T3, T5, T6    T2, T3, T6, T5    T2, T5, T3, T6
T2, T5, T6, T3    T2, T6, T3, T5    T2, T6, T5, T3
T3, T2, T5, T6    T3, T2, T6, T5    T3, T5, T2, T6
T3, T5, T6, T2    T3, T6, T2, T5    T3, T6, T5, T2
T5, T2, T3, T6    T5, T2, T6, T3    T5, T3, T2, T6
T5, T3, T6, T2    T5, T6, T2, T3    T5, T6, T3, T2
T6, T2, T3, T5    T6, T2, T5, T3    T6, T3, T2, T5
T6, T3, T5, T2    T6, T5, T2, T3    T6, T5, T3, T2

The optimizer remembers the best of all the join orders, for all the combinations. Let's say that the best join order is:

T5, T3, T6, T2

Once the optimizer has figured this out, it remembers the outer table of this join order (T5) as the outermost table in the whole query. It eliminates this table from consideration as it chooses the rest of the join order. Now it has to decide where T1, T2, T3, T4, and T6 will go in the rest of the join order. It looks at all the combinations of four tables chosen from these five:

T1, T2, T3, T4    T1, T2, T3, T6    T1, T2, T4, T6
T1, T3, T4, T6    T2, T3, T4, T6

It looks at all the join orders for each of these combinations, remembering that T5 is the outer table in the join as it makes this decision. Let's say that the best join order is T3, T6, T2, T4. It remembers T3 as the next table after T5 in the join order for the entire query, and eliminates it from consideration as it chooses the rest of the join order. The remaining tables are:

T1, T2, T4, T6

Now there are only four tables, so it looks at all the join orders for all the remaining tables. Let's say the best join order is:

T6, T2, T4, T1

This means that the join order for the entire query is:

T5, T3, T6, T2, T4, T1

Note that the four outermost tables are not the first four tables in the from clause. As you can see, even though the optimizer looks at the join orders for only four tables at a time, doing this for all combinations of four tables taken out of the from clause makes the order of tables in the from clause irrelevant. There is only one known circumstance in which the order of tables in the from clause can make any difference: if the optimizer comes up with the same cost estimate for two join orders, it chooses the first of the two join orders that it encounters. The order of tables in the from clause affects the order in which the optimizer evaluates the join orders, so in this one case table order can have an effect on the query plan.
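A rough count shows how much work this staged approach saves (the arithmetic is simple combinatorics, not a measurement): the first pass examines C(6,4) = 15 combinations of four tables times 4! = 24 join orders each, or 360 orders; the second pass examines 5 combinations times 24 orders, or 120; and the final pass examines the last 24. That is 504 join orders in all, versus 6! = 720 for an exhaustive search over all six tables, and the saving grows rapidly as tables are added to the from clause.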
Currently Supported Backup Server Devices

As of April 26, 1995, the following devices are supported by Backup Server:

Table 3: Supported Backup Server devices

Platform      SQL Server  Multifile          Single-file   Comments
              version     dump devices       dump devices
------------  ----------  -----------------  ------------  ----------------------
AIX           10.0.2      4-mm               2GB QIC       In order to do
                          8-mm               Disk          multifile dumps, a
                          8-mm 5GB                         tape device MUST be an
                          9-track                          IBM device and have an
                                                           entry in the ODM.
Data General  10.0.2      4-mm               QIC           For 8-mm support, you
                          8-mm               Disk          must install DG EBF
                          Clariion tape                    3963 and DG/UX patch
                          array                            NSQA-18785-0.
HP-UX         10.0.2      4-mm               QIC
                          9-track            Disk
ICL           10.0.2      8-mm               QIC
                                             Disk
Windows NT    10.0.2      4-mm               QIC
                                             Disk
                                             Floppy
NEC           10.0.1      none               QIC
                                             Disk
NetWare       10.0.2      8-mm               QIC
                                             Disk
                                             Floppy
NCR           10.0.2      4-mm               QIC           You must install NCR
                          8-mm               Disk          OS patch PCSCI version
                                             Floppy        94.04.25.19 in order
                                                           to use 1/4-inch
                                                           cartridges. If you
                                                           plan to use the HP DAT
                                                           4-mm, you must install
                                                           NCR OS patch PSCSI203.
OS/2          10.0.2      4-mm               QIC
                          8-mm               Disk
                                             Floppy
OSF           10.0.2      4-mm               Disk
Pyramid       10.0.1      8-mm               QIC
                          9-track            Disk
SCO           10.0.2      4-mm               QIC           For multifile dump
                          8-mm               Disk          support, you must
                                                           install EBF 3472 and
                                                           SCO OS patch UD0376B.
Sequent       10.0.1      8-mm               QIC
                          9-track            Disk
SGI           10.0.2      4-mm               QIC           Bug 67345 currently
                                             Disk          prevents support of
                                                           8-mm tape devices.
Solaris       10.0.2      4-mm Python        QIC           In order to use 4-mm
                          4-mm 6130HS RDAT   Disk          devices, you MUST add
                          4-mm WangDAT       Floppy        the following line to
                          8-mm                             /etc/system:
                          9-track                          set st:st_report_soft_
                                                           errors_on_close=0
                                                           You must then restart
                                                           your system for the
                                                           change to take effect.
Sony          10.0.1      none               QIC
                                             Disk
Stratus FTX   10.0.1      9-track            QIC
                                             Disk
SunOS         10.0.2      4-mm Python        QIC           You must have SunOS
                          4-mm 6130HS RDAT   Disk          4.1.3_U1 or later in
                          4-mm WangDAT       Floppy        order to do multifile
                          8-mm                             dumps to 4-mm devices.
                          9-track
Unisys        10.0.1      none               QIC
                                             Disk
UnixWare      10.0.2      8-mm               Disk
OpenVMS       10.0.2      4-mm               QIC
                          8-mm               Disk
                          9-track            Floppy
_________________________________________________________________

MS Access Errors with Intersolv Sybase System 10 ODBC Driver

Question
I'm using the Intersolv Sybase System 10 ODBC driver with a Windows PC client.
When I try to set up a new data source, edit an existing one, or edit a connection to a database within Microsoft Access, I get the following error:

The setup routines for the INTERSOLV Sybase System 10 ODBC driver could not be loaded. You may be low on memory and need to quit a few applications.

What can I do?

Answer
This error is misleading. It usually has nothing to do with memory; it happens when wsybset.bat has not been run. wsybset.bat sets up the correct paths to the Sybase 10.0.x Open Client(TM) Dynamic Link Libraries (DLLs) wcslib.dll and wctlib.dll. The error can also occur if the path to the ODBC driver, QESYB06.dll, is not set up. When the ODBC driver is set up, it records a DLL name in the odbc.ini file for the control panel application to use. When Windows later tries to load that DLL, the load fails if the DLL itself, or any DLL it depends on, cannot be loaded. Lack of memory can cause this, as the message suggests, but more often Windows simply cannot find one or more of the DLLs. Windows looks for DLLs by searching the current directory, then the WINDOWS/SYSTEM directory, and finally the path. If either the SYBASE DLL directory or the Intersolv DLL directory is not in the path, you will see this message. Refer to your operating system manual for further instructions on how to set paths.

Workaround for Non-char NULLs in OmniSQL Server

Question
I have a problem with OmniSQL Server(TM). I'm going to DB2, but the problem could appear for any source. OmniSQL Server, because it is based on C-ISAM files, does not support NULLs for anything other than character fields. So this query:

SELECT mydatefield FROM mytable

produces correct answers, but this one:

SELECT mydatefield FROM mytable ORDER BY ...

produces the value "1 Jan 1900" for any NULL dates. The difference is that OmniSQL Server is producing a worktable (in C-ISAM) to perform the order by. Is there a workaround for this?

Answer
Yes. You must create an Open Server(TM) gateway that sits in front of Omni. The client's requests are passed up to OmniSQL Server unchanged, but any "1/1/1900" dates are changed to NULL on the way back down. This workaround will be unnecessary when OmniCONNECT release 10.5 (formerly known as OmniSQL Server) is available, because at that point Sybase plans to replace C-ISAM with SQL Server-style storage structures. The code for this Open Server is easily produced. Add the following code to the gateway Open Server sample file, $SYBASE/sample/srvlibrary/ctosdemo.c, in the row processing in gw_procrows() (about line 4297):

if ((fmtp[i].datatype == CS_DATETIME_TYPE) &&
    (fmtp[i].status & CS_CANBENULL))
{
    if (0 == dp[i][0])
     if (0 == dp[i][1])
      if (0 == dp[i][2])
       if (0 == dp[i][3])
        if (0 == dp[i][4])
         if (0 == dp[i][5])
          if (0 == dp[i][6])
           if (0 == dp[i][7])
               ind[i] = CS_NULLDATA;
}

if ((fmtp[i].datatype == CS_DATETIME4_TYPE) &&
    (fmtp[i].status & CS_CANBENULL))
{
    if (0 == dp[i][0])
     if (0 == dp[i][1])
      if (0 == dp[i][2])
       if (0 == dp[i][3])
           ind[i] = CS_NULLDATA;
}

The code could be extended in many ways, for example to perform srv_props() for NUM_CONNECTIONS and other variables. Note that this code does not handle NULLable numeric values.

Dynamic Parameter Markers in Embedded SQL

The following addition will shortly be appearing in the Release Bulletin for all Embedded SQL products: When processing an Embedded SQL query, SQL Server determines the datatype of each dynamic parameter marker by examining the marker's context, that is, where the marker is used in the query.
An Embedded SQL query must not contain a dynamic parameter marker in a position that can cause SQL Server to determine that the marker's datatype is text or image.

Bug 69427 - sp_rename Syntax

If you have both a table column and an index with the same name, and execute the command sp_rename 'table.column/index', newname, sp_rename will always rename the column. You cannot rename the index directly in this case. This problem, that the syntax is the same for renaming a column or an index, has been filed as bug 69427; SYBASE Technical News will keep you abreast of its progress.

Bugs 31229 and 58701 - sp_addmessage

Two bugs have recently been reported against sp_addmessage:

Bug 31229
If you specify us_english as the language parameter to sp_addmessage, the langid column of sysusermessages gets set to NULL. A later retrieval via sp_getmessage without a language specified (where the session is using us_english) fails to find the message. The problem is that there are two legal values for langid that represent us_english: 0 and NULL. @@langid always returns 0, while sp_addmessage and sp_getmessage set langid to NULL if the user specified us_english directly to the procedure.

Bug 58701
A related problem, reported under bug 58701, resulted in a change to the procedure sp_addmessage for the 10.0.2 maintenance release. The problem is that sp_addmessage inserts a value of 0 into langid instead of NULL when using us_english. Apparently the change required to fix this bug has caused bug 31229 to appear again. If a message is added with sp_addmessage and no language is specified, later attempts to get that message with sp_getmessage will fail with error 17202, "Message number nn does not exist in the language language."

To work around these bugs, manually update rows in sysusermessages, setting langid values from NULL to 0.

SQL Server Release 4.9.2 - EBFs 4152 and 4659 and Bugs 54367 and 68128

Some customers have encountered problems on SQL Server 4.9.2 that can be resolved by the fix for bug 54367. This fix has been released in two different modes, however, which may confuse some customers. A customer who received EBF 4152 in order to obtain the fix for bug 54367 may find that EBF 4659 seems to no longer address the problem. Here's why:

EBF 4152 (and related EBFs for non-Sun4 platforms) was a recent 4.9.2 rollup containing a fix for bug 54367. For reasons unrelated to this bug fix, shipment was stopped for EBF 4152. EBF 4659 is the replacement EBF for the 4152 rollup. It also contains a fix for bug 54367, but this fix is not identical to the fix in EBF 4152. The difference between the two fixes is that, with EBF 4659, trace flag 299 must be turned ON in order to activate the fix. This feature was added to EBF 4659 under the fix for bug 68128. If this trace flag is not turned on, EBF 4659 behaves just like all pre-4152 releases of the 4.9.2 codeline (except for a few EBFs that contain a fix for bug 54367). You should find similar behavior in the following cases:

· EBF 4152
· EBF 4659 with trace flag 299 turned on

At this time, there is no plan to modify the 4.9.2 codeline to apply the fix for bug 54367 without invoking trace flag 299, because this fix also incurs a small, but possibly significant, amount of processing overhead. Since most customers do not require this fix, Sybase decided to make application of the fix optional, and to put control over whether or not it is applied in the customer's hands, with a trace flag. A problem similar to bug 54367 was reported against System 10 under bug 55952.
A fix for this problem is contained in the 10.0.2 release. An additional fix has been made recently to the System 10 codeline to mitigate some of the performance and contention problems caused by the fix for bug 55952. This problem and fix are reported under bug 65416. There is no 299 trace flag in the System 10 releases; the fixes are always active.

Connectivity Documentation Corrections

Embedded SQL Documentation Bugs

· 48379: The Open Client Embedded SQL Reference Manual now indicates that the parameter list for the statement declare cursor for execute stored_procedure must be enclosed in parentheses. The correct syntax is:

  exec sql declare cursor_name cursor for
      execute procedure_name ([[@param =]:hv] [,[@param =]:hv]...)

· 49230: In the Open Client Embedded SQL Reference Manual, the SET DESCRIPTOR example has been corrected: the lines beginning "value 1," "value 2," and "value 3" were previously inverted. They now read as follows:

  exec sql prepare get_royalty from
      "select royalty from roysched
       where title_id = ? and lorange <= ? and hirange > ?";
  exec sql allocate descriptor roy_desc with max 3;
  exec sql set descriptor roy_desc value 1 data = :tid;
  exec sql set descriptor roy_desc value 2 data = :sales;
  exec sql set descriptor roy_desc value 3 data = :sales;
  exec sql execute get_royalty into :royalty using sql descriptor roy_desc;

· 53686: In the Open Client Embedded SQL Reference Manual, a misspelled keyword has been corrected in two commands, declare cursor (dynamic) and declare cursor (static). The correct spelling is "read only" (two words, instead of one).

· 57291: In Table 2-3 of the Open Client Embedded SQL Reference Manual, each value under the heading "CLIENT-LIBRARY CODE" has been decremented by one. That is, -8 is now -9; -9 is now -10, and so on. (Note: The values under the heading "CODE" are not affected.) That part of the table is now as follows:

  SYBASE-DEFINED DATATYPE    CLIENT-LIBRARY CODE
  -----------------------    -------------------
  smalldatetime              -9
  money                      -10
  smallmoney                 -11
  text                       -3
  image                      -4
  tinyint                    -8
  binary                     -5
  varbinary                  -6
  longbinary                 -7
  longchar                   -2

· 56472: On page 6-17 of the Open Client Embedded SQL/COBOL Guide, the code fragment now reads:

  exec sql include "myfile" end-exec.

  The argument "myfile" must be enclosed in quotation marks.

· 52189: In the Open Client Embedded SQL/C Guide and Open Client Embedded SQL/COBOL Guide, the cancel statement has been added to the list (p. 1-7) of Embedded SQL statements not supported in System 10.

· 59348: In the Open Client Embedded SQL/C Guide and Open Client Embedded SQL/COBOL Guide, the descriptions of the sqlwarn flags (capitalized SQLWARN in COBOL) have been corrected. The sqlwarn (SQLWARN) flags are as follows:

  sqlwarn[0] (SQLWARN[1] in COBOL): If blank, no warning condition of any kind occurred, and all other sqlwarn flags are blank. If set to "W", one or more warning conditions occurred, and at least one other flag is set to "W".

  sqlwarn[1] (SQLWARN[2]): If set to "W", the character string variable that you designated in a fetch statement was too short to store the statement's result data, so the result data was truncated. You designated no indicator variable to receive the original length of the data that was truncated.
  sqlwarn[2] (SQLWARN[3]): If set to "W", the input sent to SQL Server contained a NULL value in an illegal context, such as in an expression or as an input value to a table that prohibits null values.

  sqlwarn[3] (SQLWARN[4]): The number of columns in a select statement's result set exceeds the number of host variables in the statement's into clause.

  sqlwarn[4] (SQLWARN[5]): Reserved.

  sqlwarn[5] (SQLWARN[6]): SQL Server generated a conversion error while attempting to execute this statement.

  sqlwarn[6] (SQLWARN[7]): Reserved.

  sqlwarn[7] (SQLWARN[8]): Reserved.

Client-Library Documentation Bugs

· 67847: The Open Client Client-Library/C Reference Manual gives the following misleading and incorrect statement in the "Security Features" topics pages, under the heading "Encrypted Password Security Handshakes": "Most applications are not aware of SQL Server's password encryption because Client-Library automatically handles it." The "Properties" topic page, under "Security Encryption (CS_SEC_ENCRYPTION)," will now say: "Applications which require password encryption must enable the CS_SEC_ENCRYPTION connection property. The default is CS_FALSE, which disables password encryption." For most applications, this is all that is required for password encryption; Client-Library has a default handler that performs the encryption expected by SQL Server. Applications in the following categories need to define and install an encryption handler (in addition to setting the property) to perform password encryption:

  - Open Server gateways
  - Clients which connect to an Open Server application that performs some customized form of password encryption

  The code fragment below enables password encryption:

  CS_BOOL boolval;

  /*
  ** Enable password encryption for the connection
  ** attempt.
  */
  boolval = CS_TRUE;
  if (ct_con_props(conn, CS_SET, CS_SEC_ENCRYPTION,
          (CS_VOID *)&boolval, CS_UNUSED, (CS_INT *)NULL)
      != CS_SUCCEED)
  {
      fprintf(stdout,
          "ct_con_props(SEC_ENCRYPTION) failed. Exiting\n");
      (CS_VOID)ct_con_drop(conn);
      (CS_VOID)ct_exit(ctx, CS_FORCE_EXIT);
      (CS_VOID)cs_ctx_drop(ctx);
      exit(1);
  }

  An example gateway encryption handler can be found in the Open Server sample program ctosdemo.c.

· 66733: The issue of dynamic SQL vs. stored procedures has been clarified with an example. The Open Client Client-Library/C Reference Manual, in the "Dynamic SQL" topic page section "Alternatives to Dynamic SQL," will include the following updated information: Dynamic SQL and application stored procedures offer identical functionality except that:

  - Stored procedures offer no direct means to get a description of input parameter formats. Dynamic SQL allows the application to request a description of prepared statement input from the remote server.
  - Stored procedures offer no direct means to get a description of their output, unless you execute them and incur any side effects.
  - Stored procedures are not automatically dropped when their creator logs off the remote server. Prepared dynamic SQL statements created within a SQL Server login session are automatically deallocated at the end of the session.

  They offer identical functionality in that any prepared dynamic SQL statement can be implemented as a stored procedure which returns the same set of results (plus the procedure return status result). For example, the following dynamic SQL statement queries the pubs2..titles table for books of a certain type in a certain price range:

  select * from titles where type = ? and price between ? and ?

  Here, the dynamic SQL statement has dynamic parameter markers (?)
for a type value and two price values. An equivalent stored procedure could be created as follows:

  create proc titles_type_pricerange
      @type char(12), @price1 money, @price2 money
  as
      select * from titles
      where type = @type
      and price between @price1 and @price2

  When executed with the same input parameter values, the prepared statement and the stored procedure will return the same rows. In addition, the stored procedure will return a return status result.

· 60116: Documentation has been added to state that Client-Library cannot be run in asynchronous mode inside an Open Server, only in synchronous or deferred mode. The Open Client Client-Library/C Reference Manual "Properties" topic page, under "Network I/O (CS_NETIO)," and the "Asynchronous Programming" topic page will include the following information (the Open Server Server-Library/C Reference Manual already notes the fact in the "Gateway Applications" topic page): The CS_NETIO property cannot be set to CS_ASYNC_IO when Client-Library is used with Open Server. Open Server is multi-threaded, and the thread scheduler provides concurrency among multiple threads.

· 66860: Documentation has been added to show how ct_cursor can be used to execute a stored procedure containing only one statement, a select. It is not possible to get the return status. The Open Client Client-Library/C Reference Manual, ct_cursor reference page, section "Client-Library Cursor Declare," will include the following information: Here's an example of creating a Client-Library execute cursor. Please note the restrictions listed below the example. The following pseudocode declares an execute cursor on the stored procedure title_cursor_proc:

  ct_cursor(cmd, CS_CURSOR_DECLARE, "mycursor", CS_NULLTERM,
      "exec title_cursor_proc", CS_NULLTERM, CS_UNUSED);
  ct_send(cmd);

  In this case, the body of the cursor is the text that makes up the stored procedure. The stored procedure text must contain a single select statement only. In the example above, title_cursor_proc could be created as:

  create proc title_cursor_proc
  as
      select * from titles for read only

  A stored procedure used with an execute cursor must consist of a single select statement. The stored procedure's return status is not available to the client program. Output parameter values are also not available to the client program.

· 59290: Documents now clarify that ct_options(CS_OPT_GETDATA) should not be used. The Open Client Client-Library/C Reference Manual, "Options" topic page and ct_options reference page, will include the following updated information: Don't use ct_options(CS_OPT_GETDATA). This option is for Sybase internal use only.

· 70692: The Open Client Client-Library/C Reference Manual, "Properties" topic page, section "Cursor Status (CS_CUR_STATUS)," and the ct_cmd_props() reference page will include the following updated information: The CS_CUR_STATUS property value is either:

  - CS_CURSTAT_NONE, to indicate no cursor exists on the command structure, or
  - A bit-masked status value for an existing cursor. The value is composed of the status bit-masks CS_CURSTAT_CLOSED, CS_CURSTAT_DECLARED, CS_CURSTAT_RDONLY, CS_CURSTAT_UPDATABLE.

  The meanings of these are correctly described under "Cursor Status" in the "Properties" topic page.
The following code fragment illustrates use of the cursor status property to display the status of the Client-Library cursor (if any) declared on a command structure:

#define RETURN_IF(a,b) if (a != CS_SUCCEED)\
    { fprintf(stderr, "Error in: %s line %d\n", \
      b, __LINE__); return a; }

/*
** cursor_status() -- Print status information about the
** Client-Library cursor (if any) declared on a CS_COMMAND
** structure.
**
** PARAMETERS:
**   cmd -- an allocated CS_COMMAND structure.
**
** RETURNS:
**   CS_FAIL if an error occurred.
**   CS_SUCCEED if everything went ok.
*/
CS_RETCODE cursor_status(cmd)
CS_COMMAND *cmd;
{
    CS_RETCODE ret;
    CS_INT     cur_status;
    CS_INT     cur_id;
    CS_CHAR    cur_name[CS_MAX_NAME];
    CS_CHAR    updateability[CS_MAX_NAME];
    CS_CHAR    status_str[CS_MAX_NAME];
    CS_INT     outlen;

    /*
    ** Get the cursor status property.
    */
    ret = ct_cmd_props(cmd, CS_GET, CS_CUR_STATUS, &cur_status,
                       CS_UNUSED, (CS_INT *) NULL);
    RETURN_IF(ret, "cursor_status: ct_cmd_props(CUR_STATUS)");

    /*
    ** Is there a cursor?
    ** Note that CS_CURSTAT_NONE is not a bit mask, but the
    ** other values are.
    */
    if (cur_status == CS_CURSTAT_NONE)
        fprintf(stdout,
            "cursor_status: no cursor on this command structure\n");
    else
    {
        /*
        ** A cursor exists, so check its state. Is it
        ** declared, opened, or closed?
        */
        if ((cur_status & CS_CURSTAT_DECLARED) == CS_CURSTAT_DECLARED)
            strcpy(status_str, "declared");
        if ((cur_status & CS_CURSTAT_OPEN) == CS_CURSTAT_OPEN)
            strcpy(status_str, "open");
        if ((cur_status & CS_CURSTAT_CLOSED) == CS_CURSTAT_CLOSED)
            strcpy(status_str, "closed");

        /*
        ** Is the cursor updateable or read only?
        */
        if ((cur_status & CS_CURSTAT_RDONLY) == CS_CURSTAT_RDONLY)
            strcpy(updateability, "read only");
        else if ((cur_status & CS_CURSTAT_UPDATABLE) == CS_CURSTAT_UPDATABLE)
            strcpy(updateability, "updateable");
        else
            updateability[0] = '\0';

        /*
        ** Get the cursor id.
        */
        ret = ct_cmd_props(cmd, CS_GET, CS_CUR_ID, &cur_id,
                           CS_UNUSED, (CS_INT *) NULL);
        RETURN_IF(ret, "cursor_status: ct_cmd_props(CUR_ID)");

        /*
        ** Get the cursor name.
        */
        ret = ct_cmd_props(cmd, CS_GET, CS_CUR_NAME, cur_name,
                           CS_MAX_NAME, &outlen);
        RETURN_IF(ret, "cursor_status: ct_cmd_props(CUR_NAME)");

        /*
        ** Null terminate the name.
        */
        if (outlen < CS_MAX_NAME)
            cur_name[outlen] = '\0';
        else
            RETURN_IF(CS_FAIL, "cursor_status: name too long");

        /* Print it all out */
        fprintf(stdout, "Cursor '%s' (id %d) is %s and %s.\n",
                cur_name, cur_id, updateability, status_str);
    }

    return CS_SUCCEED;
} /* cursor_status */

CS-Library Documentation Bug

· 64520: The documentation and examples for callback error handling have been improved. The Common Libraries Reference Manual chapter "Introducing CS-Library" and the cs_config reference page will include the following information: Here is a CS-Library error handler:

/*
** cslib_err_handler() - CS-Library error handler.
**
** This routine is the CS-Library error handler used by this
** application. It is called by CS-Library whenever an error
** occurs. Here, we simply print the error and return.
**
** Parameters:
**   context
**     A pointer to the context handle for the context
**     in which the error occurred.
**   errmsg
**     The structure containing information about the
**     error.
**
** Returns:
**   CS_SUCCEED
*/
CS_RETCODE CS_PUBLIC cslib_err_handler(context, errmsg)
CS_CONTEXT   *context;
CS_CLIENTMSG *errmsg;
{
    /*
    ** Print the error details.
*/ fprintf(stdout, "CS-Library error: "); fprintf(stdout, "LAYER = (%ld) ORIGIN = (%ld) ", CS_LAYER(errmsg->msgnumber), CS_ORIGIN(errmsg->msgnumber) ); fprintf(stdout, "SEVERITY = (%ld) NUMBER = (%ld)\n", CS_SEVERITY(errmsg->msgnumber), CS_NUMBER(errmsg->msgnumber) ); fprintf(stdout, "\t%s\n", errmsg->msgstring); /* ** Print any operating system error information. */ if( errmsg->osstringlen > 0 ) { fprintf(stdout, "CS-Library OS error %ld - %s.\n", errmsg->osnumber, errmsg->osstring); } /* ** All done. */ return (CS_SUCCEED); } /* cslib_err_handler */ Here is the code to install it: /* ** Install the function cslib_err_handler as the ** handler for CS-Library errors. */ if (cs_config(context, CS_SET, CS_MESSAGE_CB, (CS_VOID *)cslib_err_handler, CS_UNUSED, NULL) != CS_SUCCEED) { /* Release the context structure. */ (void)cs_ctx_drop(context); fprintf(stdout, "Can't install CS-Lib error handler.\ Exiting.\n"); exit(1); } The error severities are the same as those explained in the Open Client Client-Library/C Reference Manual, on the "Client-Library Messages" topic page. Bulk Library Documentation Bugs · 69977: The documentation has clarified to specify that only one CS_BLKDESC or one CS_COMMAND can be active on a connection at one time. The Common Libraries Reference Manual, blk_init reference page, will say: Multiple CS_BLKDESC and CS_COMMAND structures can be exist on the same connection, but only one CS_BLKDESC or CS_COMMAND structure can be active at once. - A bulk-copy operation begun with blk_init must be completed before the connection can be used for any other operation. - A bulk-copy operation can not be started when the connection is being used to initiate, send, or process the results of other Client-Library or Bulk-Library commands. · 69153: Documentation has been added on how Bulk Library errors are handled. The Common Libraries Reference Manual chapter, "Introducing Bulk Copy", will say: Bulk-Library errors are reported as Client-Library errors for Client-Library bulk-copy routines and as Server-Library errors for Server-Library bulk-copy routines. Server-Library routines are those routines that take a SRV_PROC pointer as an argument. Client applications should handle Bulk-Library errors by installing a Client-Library error handler or handle them inline with ct_diag(). Applications should also install a remote server message handler, because errors detected by the SQL server will be reported as server messages. To handle Bulk-Library errors, Open Server gateway applications should install a Server-Library error handler in addition to the handlers listed above. EBF News The following tables list the latest rollup numbers for SQL Server 4.9.2 and 10.x on selected platforms, and selected bugs fixed in them. 
Latest Rollups for SQL Server 4.9.2

Platform                          Rollup Number
--------                          -------------
SunOS Release 4.x (BSD)           4659
HP 9000 Series 800 HP-UX          4660
IBM RISC System/6000 AIX          4661
AT&T System 3000 UNIX SVR4 MPRAS  4662
Digital OpenVMS VAX               4663
Sun Solaris 2.x                   4664
Digital OpenVMS Alpha 1.5         4665
Digital OpenVMS Alpha 1.0         2121

Bugs Fixed in Latest Rollup, SQL Server 4.9.2

This list is additive; that is, only the bugs fixed in the latest series of rollups are included. Please see SYBASE Technical News, Volume 4, Number 1, for the list of bugs fixed earlier.

Bug Number  Description
----------  -----------
68128       Reverse the behavior of the 299 trace flag: trigger
            recompilation of stored procedures for temporary table
            schema changes only if command-line trace flag 299 is set.
67868       The "dbqual" in DB-Library returns NULL for the where
            clause in browse mode, because SQL Server sends the wrong
            table number in the TDS stream. This could cause data
            inconsistency, because an update or insert might affect
            the wrong rows due to a NULL where clause.
54376       A subquery in a select list returns multiple results
            instead of raising an error. (The documentation is
            unclear on whether this is legal syntax; the correct
            behavior is either an error stating that a subquery in
            the select list is illegal, or Error 512, multiple
            results.)

Latest Rollups for SQL Server 10.0.2

Platform                          Rollup Number
--------                          -------------
SunOS Release 4.x (BSD)           4009
HP 9000 Series 800 HP-UX          4010
IBM RISC System/6000 AIX          4011
AT&T System 3000 UNIX SVR4 MPRAS  4012
Digital OpenVMS VAX               4013
Sun Solaris 2.x                   4014
Digital OpenVMS Alpha 1.5         4015
Digital OSF/1                     4019
NetWare                           none
OS/2                              none
Windows NT                        4996

Bugs Fixed in Latest Rollup, SQL Server 10.x

There has been no change in this list since SYBASE Technical News, Volume 4, Number 1.

Q5.6: TECHNICAL NEWS

Volume 4, Number 4    October, 1995
_________________________________________________________________

Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose.
In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental, or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey
Contributing Writers: Sandra Dellafiora, Aimee Grimes, Louise Kirby, Jeff Lichtman, John McVicker, Pat Mullin, Rahul Upadhyaya, Rob Weaver, James White, Elton Wildermuth

Send comments and suggestions to: SYBASE Technical News, 6475 Christie Avenue, Emeryville, CA 94608, or technews@sybase.com

This issue of SYBASE Technical News contains new information about your SYBASE software. If needed, duplicate this newsletter and distribute it to others in your organization. All issues of SYBASE Technical News and the troubleshooting guides are included on the AnswerBase CD. To receive this document by regular email, send your name, full internet address, and customer ID to technews@sybase.com.

IN THIS ISSUE:

Tech Support News/Features
  Technical Support Holiday Schedule for Q4 '95
SQL Server
  System 11 Subquery Processing Performance Improvements
  Tuning to Prevent Time-Slice Errors
  605 Errors on load database master
  System Database Recovery in System 10
  Understanding sysusages.unreservedpgs
  Understanding Spinlocks
  Patches for Hard / Transient 605 Errors on HP
Connectivity / Tools / PC
  Localization Problems and cs_ctx_alloc Failure
  End-of-Life Plan for APT and DWB
  Certification and Fixed Bug Reports
  Mainframe EBF Information
  Net-Gateway EBF Information
  Bug 69332: Timeslice Disables Automatic Checkpoint and waitfor Command

Technical Support Holiday Schedule for Q4 '95

Sybase Technical Support is open on all holidays and provides full service on many holidays. On the holidays shown below, Technical Support provides limited service. During limited-service holidays, the following Technical Support coverage will be provided:

· SupportPlus Preferred and Advantage Customers may log all cases; priority 1 and 2 cases will be worked on over the holiday.
· 24x7 and 24x5 Support Customers may log priority 1 (down server) cases, which will be worked on over the holiday.
· SupportPlus Standard, Desk Top, and Regular Support Customers may purchase Extended-hour Technical Support for coverage over the holiday.

Table 1: Sybase Technical Support limited-service holidays - U.S. customers

Holiday        Date
-------        ----
Labor Day      September 4
Thanksgiving   November 23
Christmas Day  December 25

Table 2: Sybase Technical Support limited-service holidays - Canadian customers

Holiday                Date
-------                ----
Labour Day             September 4
Canadian Thanksgiving  October 9
Christmas Day          December 25
Boxing Day             December 26

System 11 Subquery Processing Performance Improvements

For System 11, the query processor was substantially rewritten in order to meet the following goals:

· Solve subquery performance problems in System 10(TM)
· Solve subquery correctness problems in System 10
· Maintain compatibility with existing views and procedures

System 10 Performance Issues

System 10 used two subquery processing strategies, "inside-out" and "unnest." In the "inside-out" strategy, SQL Server(TM) built the subquery result set first, then joined it with the outer query. In the "unnest" strategy, SQL Server changed the subquery to an existence join.
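As a hypothetical illustration of the two strategies (a sketch using the pubs2 sample tables, not actual optimizer output), consider:

select title from titles
where pub_id in (select pub_id from publishers where state = "CA")

Under "inside-out" processing, SQL Server would first build the set of matching pub_id values from publishers, then join that result set with titles. Under "unnest," the query would instead be treated roughly as the existence join:

select title from titles t
where exists (select * from publishers p
      where p.pub_id = t.pub_id and p.state = "CA")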
The following performance issues existed as a result of these strategies:

· The "inside-out" processing strategy gave good throughput (the length of time it takes SQL Server to get to the last row of a query), but poor response time (the length of time it takes SQL Server to return the first row of a query).
· "Inside-out" processing required two joins: one to form the subquery result and one for the outer result.
· If there was a restrictive join in an outer query, the result set for a subquery could be bigger than necessary.
· Internal use of the group by all clause forced an extra pass over the grouping table.
· A join with a subquery result could not be optimized if the subquery was part of an or clause, because of the resulting cartesian product.

System 10 Correctness Issues

There were several known problems in System 10 having to do with nulls, empty tables, and duplicates, none of which were easy to fix under the "inside-out" system.

Problems Solved by System 11

In System 11, subqueries are now modeled as nested loops, which has the added advantage of matching the ISO/ANSI definition of subqueries. Some unnesting can take place, in the form of flattening to joins and materialization. Unflattened subqueries are treated as expressions in this system. There is no need to build the entire subquery result set up front; subquery results are calculated only when necessary, so there is no more assembling of subquery results that end up not being used. Likewise, there is no longer a need to join subquery results back to the outer query. All of this results in better response time. A few other features also improve performance:

· Quantified predicates no longer use group by all
· Subqueries in or clauses no longer use non-optimized joins
· Some not in and all subqueries are optimized better

Correctness problems also go away with the new system, because the processing closely matches the ANSI-standard SQL definition.

Performance Enhancements

The new system uses several performance enhancements:

· "Short circuiting"
  In "short circuiting," evaluation of and lists stops early if any part is false, and evaluation of or lists stops early if any part is true. where clauses are automatically shuffled to put the subquery at the end, forestalling unnecessary execution of subqueries. For example, the following query:

  select a from t
  where NOT EXISTS (select * from s where b = t.c)
  AND d = 2

  is transformed to:

  select a from t
  where d = 2
  AND NOT EXISTS (select * from s where b = t.c)

  In this case, the subquery will not execute for rows in t where d is not equal to 2.
· Better flattening
· More efficient materialization
· Caching
  An in-memory cache is used for subqueries that are not flattened or materialized. This cache stores the result values for previous executions of the subquery; values in the correlation columns of subsequent executions are used as look-up keys for the cache. The cache starts off big and shrinks to the appropriate size depending on the cache hit rate. Caching improves performance in cases where there are duplicates in correlation columns, especially when the values are ordered.
· Elimination of the sort for distinct expression queries
  In System 10, select distinct subqueries forced SQL Server to perform a sort step to eliminate duplicates; in the new system, no such step is used. Instead, SQL Server uses a special system-internal aggregate, once unique, to raise an error if more than one distinct value is encountered. Eliminating the sort step saves time.
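For instance (again a hypothetical pubs2-based sketch), a distinct expression subquery such as:

select au_lname, au_fname from authors
where city = (select distinct city from publishers where pub_id = "1389")

would have required a sort worktable in System 10 to remove duplicate city values from the subquery result; in System 11 the internal once unique aggregate simply verifies that only one distinct city is produced, raising an error otherwise, with no sort step.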
View materialization has been made more efficient for views containing aggregates.

Tidbits and New Features

The following limitations have been removed:

· distinct and group by not allowed in the same query
· Correlation column and noncorrelation column not allowed on the same side of an equal sign

showplan output has been made more readable and has been enhanced to show:

· Line numbers to help locate statements in query text
· Subquery numbers and nesting levels for each subquery's plan
· The type of each subquery
· Where each subquery is nested
· Types of scans
· Index coverage
· Index keys used to limit the scan
· Worktable numbers
· The clustered index name

The sort step for distinct will no longer be necessary in some cases where there is an index on the selected columns.

Effects for the User

Some subqueries, in and exists for example, will continue to perform as they have in the past. On the other hand, some subqueries, especially not in, not exists, all, and subqueries in or clauses, will perform considerably better. For almost all non-flattened subqueries, you will see improved response time. One final advantage of the new system is that it leaves more leeway for future performance improvements. SYBASE Technical News will keep you informed of further progress in this area as it is made.

Tuning to Prevent Time-Slice Errors

If you have stored procedures that take a long time to complete, you may have encountered the time-slice error, where SQL Server kills the running process, sometimes shutting down with a stack trace in the error log. To prevent this from happening, you can tune certain SQL Server configuration parameters, but each carries risks along with its benefits.

Understanding Time-Slice Errors

To make the most informed decision on configuration tuning, you first have to understand what causes time-slice errors. Each task on your system gets a fixed period of CPU time, called a time slice. During this time slice, the task is expected to complete a "unit" of work and then yield the CPU. Sometimes the task will not complete all its work in one time slice; in this case, if the task does not volunteer to be put to sleep so that another task can run, SQL Server will allot it one additional amount of time to complete. SQL Server scheduling is non-preemptive, which means that if a process fails to yield, even with the extra time allotted, SQL Server will simply kill the process rather than putting it on hold and giving the CPU to another process. This is the time-slice error. In SQL Server releases 4.9.x and 10.x, the time-slice value is specified by the buildmaster parameter ctimeslice and the extra-time value by the parameter ctimemax; in System 11, the time-slice and extra-time values are part of the flat configuration file mentioned elsewhere in this issue.

When a Time-Slice Error Causes a SQL Server Crash

There are hard-coded yields written into some of SQL Server's own most CPU-consuming functions. This gives the scheduler a chance to reset and give the process another chunk of time once the process queue is empty. If the queue is not empty, a process with a yield coded into it will yield voluntarily, and SQL Server will not kill it. If there is no yield coded into a SQL Server function holding a spinlock, SQL Server will write the kernel error "timeslice -201" to the error log and kill itself, resulting in a server crash with a stack trace. Sybase Engineering can sometimes use this stack trace to determine where in the dataserver code it would be useful to insert a yield statement.
To help Engineering in this process, if SQL Server is crashing with time-slice errors, you should collect several such stack traces so that Engineering can better pinpoint the best yield location.

Time-Slice Errors on User Applications

More commonly, it is a user application, stored procedure, or query that gets the time-slice error. In this case, SQL Server kills the offending process but does not crash.

Reading the Time-Slice Error Message

While a task is running, the scheduler counts the time that the process has been running in ticks. The amount of time represented by each tick is platform-dependent; you can check how many microseconds SQL Server thinks a tick is by entering the command select @@timeticks. In SQL Server releases up to and including 10.x, the buildmaster parameter cclkrate defines this number. On most systems, the value for cclkrate is rounded to the nearest 10,000. On some systems, the value is rounded to the nearest 1,000,000, or one second.

Both ctimeslice and ctimemax (and their equivalent entries in the System 11 configuration file) are defined in ticks. A value of 1 for ctimeslice is equivalent to 1 tick. If cclkrate is set to 100,000 and ctimeslice to 1, a process gets a time slice of one tenth of a second. If ctimemax is set to 200 ticks, the process will then get another 20 seconds of time on the CPU (100,000 microseconds * 200 ticks = 20 seconds). In the time-slice error message, you will see a negative number, for example:

00:95/07/16 01:54:08.94 kernel timeslice -3601, current process infected

This is the total number of ticks that the process accumulated on the CPU during this one "trip" before being terminated (ctimeslice + ctimemax). When this combined value is exceeded, the time-slice error is generated, indicating that SQL Server has infected the process preparatory to killing it.

Parameter Tuning

From the above information, you may have deduced that if you increase ctimeslice and ctimemax, you will experience fewer time-slice errors. This is, in fact, the case, but there are two risks that you must take into account before you decide to increase those values:

· Increasing ctimeslice and ctimemax means that a process can run for the full time allotted without yielding, holding up other processes. The result is that, without any message to the user, SQL Server can come to an apparent halt for the full span of ctimeslice + ctimemax.
· More seriously, increasing ctimeslice and ctimemax to fend off time-slice errors may mask a deeper SQL Server problem.

Bearing these in mind, then, the way to adjust the two parameters is as follows. For release 10.x and earlier, the buildmaster commands are:

buildmaster -yctimeslice=value_in_ticks
buildmaster -yctimemax=value_in_ticks

When System 11 is released, these will be the commands:

sp_configure "time slice", value_in_milliseconds
sp_configure "cpu grace time", value_in_milliseconds

Note that unlike the earlier parameters, these are valued in milliseconds rather than ticks. You must have the "sa" role to change these parameters. It is theoretically possible to set ctimemax to some very high value, say 1,000, but you must decide for yourself whether the benefit of having no time-slice errors outweighs the possible performance degradation. We recommend that you leave these values at the default unless you are getting time-slice errors.
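To make the units concrete, here is a hypothetical worked example (the cclkrate value is illustrative; check select @@timeticks on your own platform): with cclkrate at 100,000 microseconds per tick, one tick is 0.1 second, so to give processes roughly 30 seconds of extra time before being killed you would set:

buildmaster -yctimemax=300

since 300 ticks * 0.1 second per tick = 30 seconds.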
If you are seeing these sorts of problems, you can tinker with the time-slice and grace-time (timemax) values to find the best balance of performance and time-slice error avoidance for your site.

605 Errors on load database master

When you attempt to load a dump of your master database and get a 605 error, check the sysusages table. It may be that the sysusages entries for master are different between the dump and the database into which you are loading.

Overview

If sysusages.vstart values don't match between the dump and the receiving database, SQL Server will raise a 605 error. You can either reconstruct sysusages through a sequence of create database and alter database commands, or you can follow the process outlined in the "System Database Recovery" chapter of the SQL Server Troubleshooting Guide for rebuilding the master database when no valid dump exists.

Technical Details

For example, say that the command select * from sysusages on your SQL Server gives the following output:

dbid  segmap  lstart  size  vstart     pad   unreservedpgs
----  ------  ------  ----  ---------  ----  -------------
   1       7       0  1536          4  NULL              0
   1       7    1536  1024       7172  NULL              0
   2       7       0  1024       2564  NULL            672
   2       4    1024   512  134217728  NULL            512
   2       7    1536   512  117441024  NULL            512
   3       7       0  1024       1540  NULL            664
   3       3    1024   512  117440512  NULL            512
   4       7       0  6144   16777216  NULL           1528
   5       7       0  5120   33554432  NULL           NULL
   5       4    5120  2560   50331648  NULL           NULL
   6       7       0  1024       3588  NULL            648
   7       7       0  1024       4612  NULL            528
   8       7       0  1024   67108864  NULL              0
   8       3    1024  3072   67109888  NULL              0
   8       4    2560  1024   83886080  NULL              0
   9       7       0  1536  100663296  NULL           1176
  10       7       0  1536       5636  NULL           1184
  11       7       0  1536  100664832  NULL           1176

(18 rows affected)

In the above output, lstart is the starting location of the logical page. Logical pages in the database are numbered starting at 0, in an unbroken stream. They are mapped to actual storage through the vstart values; virtual page numbers are coded structures containing the device number and the page offset on the device. The mapping occurs like this:

1. SQL Server, looking for a particular page number (for example, 2000), selects the lstart closest to, but not greater than, that number (say, 1536).
2. SQL Server subtracts the selected lstart from the number of the page it is looking for; this is how far that page is from the start of the disk piece where it is stored.
3. SQL Server adds that difference to vstart, then masks off the device number encoded in vstart. The resulting value is the page offset from the actual starting address on the device.
4. Finally, SQL Server multiplies that offset by the size of a Sybase page to get the location where it should actually start reading.

The device number, as already mentioned, is encoded in vstart; dividing vstart by 2**24 (2 to the power of 24) gives the device number.
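To make the arithmetic concrete, here is a worked example based on the sysusages rows above (the page number is arbitrary). Suppose SQL Server needs logical page 2000 of master (dbid 1). The qualifying disk piece is the one with lstart = 1536 and size = 1024, which covers logical pages 1536 through 2559; the offset into the piece is 2000 - 1536 = 464, so the virtual page number is 7172 + 464 = 7636. A query in the style used later in this article gives the same result:

1> select vstart = vstart + 2000 - lstart
2> from sysusages
3> where dbid = 1
4> and lstart <= 2000
5> and size > 2000 - lstart
6> go

This should return 7636. Since 7636 / 2**24 = 0, the page lives on device 0 (the master device) at page offset 7636; with 2048-byte pages, SQL Server reads the 2048 bytes starting at byte 7636 * 2048 of that device.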
You can use the following query to look at all objects on the master device (device number 0):

1> select * from sysusages
2> where vstart/power(2,24) = 0
3> order by vstart
4> go

Here is an example of output from that query:

dbid  segmap  lstart  size  vstart  pad   unreservedpgs
----  ------  ------  ----  ------  ----  -------------
   1       7       0  1536       4  NULL            208
   3       7       0  1024    1540  NULL            664
   2       7       0  1024    2564  NULL            664
   6       7       0  1024    3588  NULL            648
   7       7       0  1024    4612  NULL            528
  10       7       0  1536    5636  NULL           1184
   1       7    1536  1024    7172  NULL           1024

(7 rows affected)

sysusages shows that usage on the master device is arranged as shown in Figure 1:

vstart   0   4          1540    2564    3588    4612    5636       7172    8196
size     |   |   3mb    |  2mb  |  2mb  |  2mb  |  2mb  |   3mb    |  2mb  |
         .......................................................................
dbid     |   |    1     |   3   |   2   |   6   |   7   |    10    |   1   |
lstart   |   | 0 - 1535 |0-1023 |0-1023 |0-1023 |0-1023 | 0 - 1535 |1536-  |
         |   |          |       |       |       |       |          |2559   |
         |...|..........|.......|.......|.......|.......|..........|.......|
fragment   1      2         3       4       5       6        7         8

lstart = logical page number

Fig. 1

In the master database (dbid = 1), whenever a page is requested, for example (logical page number) 1536, sysusages is used to convert this page number's lstart into the corresponding vstart:

1> select vstart = vstart + 1536 - lstart
2> from sysusages
3> where dbid = 1
4> and lstart <= 1536
5> and size > 1536 - lstart
6> go

vstart
-----------
       7172

Then SQL Server accesses the next 2048 bytes, starting at vstart = 7172, treating them as logical page 1536. When you run dump database master, all allocated pages with lstart 0 through 2559 (that is, vstart 4 through 1539 and 7172 through 8195) are copied to the dump device. Suppose that after the dump you execute buildmaster to create new master, model, and tempdb databases. sysusages now looks like this:

1> select * from sysusages
2> where vstart/power(2,24) = 0
3> order by vstart
4> go

dbid  segmap  lstart  size  vstart  pad   unreservedpgs
----  ------  ------  ----  ------  ----  -------------
   1       7       0  1536       4  NULL            208
   3       7       0  1024    1540  NULL            664
   2       7       0  1024    2564  NULL            664

The layout for the master device is shown in the following figure:

vstart   0   4          1540    2564    3588
size     |   |   3mb    |  2mb  |  2mb  |
         .................................
dbid     |   |    1     |   3   |   2   |
lstart   |   | 0 - 1535 |0-1023 |0-1023 |
         |...|..........|.......|.......|

Fig. 2

Figure 3 shows the layout after you run alter database on master to add 2MB:

vstart   0   4          1540    2564    3588    4612
size     |   |   3mb    |  2mb  |  2mb  |  2mb  |
         .........................................
dbid     |   |    1     |   3   |   2   |   1   |
lstart   |   | 0 - 1535 |0-1023 |0-1023 |1536-  |
         |   |          |       |       |2559   |
         |...|..........|.......|.......|.......|

Fig. 3

Now, vstart for logical page 1536 of the master database is 3588, as shown in the following:

1> select vstart = vstart + 1536 - lstart
2> from sysusages
3> where dbid = 1
4> and lstart <= 1536
5> and size > 1536 - lstart
6> go

vstart
-----------
       3588

When you load the dump you made earlier into master, all the pages for master are copied from the dump device to the master device in the correct place. However, at the same time, whatever was in sysusages before this load is overwritten by what is in the dump. This means that the master device is now laid out as shown in Figure 3, but sysusages shows it to be as shown in Figure 1.
Therefore, immediately after the load, vstart for logical page 1536 of master is 7172, which is incorrect, as is clear from Figure 3. This raises a 605 error as the load completes and recovery occurs:

Attempt to fetch logical page 1536 in database `master' belongs to object id `0', not to object `'.

SQL Server consequently shuts down.

Action

You can confirm that the source of your 605 error is indeed the wrong vstart for a logical page in master by booting SQL Server with trace flag 3607. This brings up SQL Server without recovering any databases. Edit the dataserver command line in your runserver file to add the trace flag.

· In UNIX, the file is usually $SYBASE/install/RUN_SVR, where SVR is your SQL Server name. Add the option -T3607 to the dataserver command line in your runserver file, for example:

  $SYBASE/bin/dataserver -T3607 other_qualifiers

  Be aware that the flag is invoked with -T (uppercase), not -t (lowercase).

· In OpenVMS, the file is usually SYBASE_SYSTEM:[SYBASE.INSTALL]RUN_servername.COM. Add the option /trace=3607 to the dataserver command line in your runserver file, for example:

  $ server :== $sybase_system:[sybase.bin]dataserver.exe
  $ server /trace=3607 /other_qualifiers

Look at sysusages, and run this query to find vstart for the offending page, in this case page 1536:

1> select vstart = vstart + 1536 - lstart
2> from sysusages
3> where dbid=1
4> and lstart <= 1536
5> and size > 1536 - lstart
6> go

 vstart
-----------
       7172

In this case, page 1536 for master was copied to the correct place, but when SQL Server tries to access it, it calculates a wrong vstart based on the incorrect value in sysusages and reads 2048 bytes from the wrong location on the master device. This causes SQL Server to raise the 605 error.

The best solution is to construct a sequence of create database and alter database commands that re-create sysusages as it appears after you load your dump of master. Based on the original output:

1> select * from sysusages
2> where vstart/power(2,24) = 0
3> order by vstart
4> go

dbid  segmap  lstart  size  vstart  pad   unreservedpgs
----  ------  ------  ----  ------  ----  -------------
   1       7       0  1536       4  NULL            208
   3       7       0  1024    1540  NULL            664
   2       7       0  1024    2564  NULL            664
   6       7       0  1024    3588  NULL            648
   7       7       0  1024    4612  NULL            528
  10       7       0  1536    5636  NULL           1184
   1       7    1536  1024    7172  NULL           1024

(7 rows affected)

you can re-create sysusages by following these steps:

1. Run buildmaster to get the first three rows in sysusages.
2. Create the fourth row by creating a database of 1024 pages.
3. Create the fifth row by creating a database of 1024 pages.
4. Create the sixth row by creating a database of 1536 pages.
5. Finally, alter master by 1024 pages to create the seventh row in sysusages.

As mentioned at the beginning of this article, instead of re-creating sysusages, you can follow the process outlined in the "System Database Recovery" chapter of the SQL Server Troubleshooting Guide for rebuilding the master database when no valid dump exists.

The fact that load database master generates 605 errors if sysusages.vstart values don't match has been registered under bug number 45426, which has not yet been fixed.

System Database Recovery in System 10

This article supplements two sections in the "System Database Recovery" chapter of the SQL Server Troubleshooting Guide: "Master Device Completely Lost" and "Master Database Corrupt and SQL Server Will Not Start."
Master Device Completely Lost

If a Valid Dump of the master Database Does Not Exist

For 10.0 and Later SQL Servers

Steps 1 through 5 are fine, but step 6 should read:

Execute sp_helpdb sybsystemprocs to make sure that the system database sybsystemprocs is present. If so, then proceed to step 7. If it is not, rebuild sybsystemprocs as follows:

1. Execute disk init to initialize a new device for sybsystemprocs. If you are running 10.0, the device size should be 10MB. If you are running 10.0.2, the device size should be 12MB. If you are running System 11, the device size should be 16MB, unless you are on Digital UNIX (OSF), in which case the size should be 19MB.
2. Execute create database sybsystemprocs, with the sizes as above.
3. Run installmaster by executing the command isql -Usa -Ppassword < installmaster.

master Database Corrupt and SQL Server Will Not Start

If a Valid Dump of the master Database Exists

For 10.0 and Later SQL Servers

Take out step 10. There is no reason to run installmaster if you are recovering with dumps; everything will be there.

If a Valid Dump of the master Database Does Not Exist

For 10.0 and Later SQL Servers

Steps 1 through 4 are fine, but step 5 should read:

Execute sp_helpdb sybsystemprocs to make sure that the system database sybsystemprocs is present. If so, then proceed to step 7. If it is not, rebuild sybsystemprocs as above.

Understanding sysusages.unreservedpgs

Question

How does SQL Server use the column unreservedpgs in the sysusages table, and when is it updated?

Answer

unreservedpgs contains the count of unreserved pages for that disk piece during the time that the database is "closed". (A database is "closed" when SQL Server has no in-memory structure describing it. If SQL Server does have in-memory structures, it uses them in preference to the sysusages row because access is thousands of times faster.)

sysusages.unreservedpgs is updated in these circumstances:

· When the database is automatically checkpointed
· When the database changes state from "open" to "closed"

Note that the update only occurs when the database is automatically checkpointed; a user issuing the checkpoint command won't cause it to happen. It must be the system doing it. Because this column is only updated occasionally, it is inaccurate most of the time. Sybase provides a built-in function for the purpose of getting the accurate number; it should be used like this:

select curunreservedpgs(dbid, lstart, unreservedpgs)
from master.dbo.sysusages
[ other clauses ]

NOTE: This function is used by stored procedures such as sp_helpdb and sp_helpsegment to display space usage. This information applies only to releases 10.x and later.

Understanding Spinlocks

Spinlock Defined

The lowest level of synchronization in a multiprocessor SQL Server is the spinlock. SQL Server uses the spinlock to synchronize access to the critical data used to control semaphores, page locks, and other thread management tasks.

NOTE: Spinlocks and semaphores are only used in multiprocessor servers, since otherwise a process is not allowed to schedule out (or yield) while holding one.

In general, a spinlock works like this:

1. A thread or engine obtains a spinlock before attempting to change data.
2. While that thread or engine has the spinlock, if another thread or engine attempts to obtain the same spinlock, that thread "spins" on the lock until the lock is released.
3. Once it is safe for others to access or change the data, the original thread releases the spinlock.
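Spinlocks are internal kernel objects and cannot be taken or examined directly from SQL. On System 11, the closest view from isql is sp_sysmon, whose report includes spinlock contention figures (for example, on data cache spinlocks). A minimal sketch, assuming the interval form of the call:

1> sp_sysmon "00:01:00"
2> go

Sustained high contention percentages in the report are the usual sign that threads are spinning as described in step 2 above.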
A spinlock might perform code protection or data protection. For example, SQL Server always uses one particular spinlock for modifying a particular object. A spinlock protecting code protects only a very small fragment of the code, allowing it to be executed only in a single-threaded fashion.

Spinlocks are the most basic form of locking protocol, usually requiring hardware support: an atomic instruction across the bus. Spinlocks are used in preference to more complex semaphores or page-style locks to control access to frequently changed data structures precisely because they are so basic; it doesn't take long to acquire a spinlock.

Spinlocks and Server Panic

SQL Server "panics" when a thread dies holding a spinlock, because the data structure protected by the spinlock is then in an unknown state. Other threads cannot know how far the first thread got in changing the data structure before it died. For example, suppose SQL Server needs to delete an object, b, from a doubly linked list:

a<->b<->c

SQL Server grabs a spinlock and proceeds toward this state:

a<->c

Suppose, then, that there is a crash in the middle of the operation. The states of the queue and the links are unknown:

a??b??c

It's hard either to undo what the crashed task was doing or to complete it. The safest strategy is to shut down. If SQL Server were to continue running with the lock held, eventually all of the other threads would block on the held lock and the server would hang; an orderly shutdown is preferable.

A typical spinlock error in the error log will look like the following:

01:95/07/25 11:22:53.04 kernel  timeslice -1501, current process infected
    /* Here the lock-holding process dies */
01:95/07/25 11:22:53.11 kernel  ************************************
01:95/07/25 11:22:53.13 kernel  SQL causing error : execute master..lp_catch_wcid_user
01:95/07/25 11:22:53.13 kernel  curdb = 1 pstat = 0x10100 lasterror = 0
01:95/07/25 11:22:53.13 kernel  preverror = 0 transtate = 1
01:95/07/25 11:22:53.13 kernel  curcmd = 197 program = isql
01:95/07/25 11:22:53.15 kernel  Spinlocks held by kpid 21626922
01:95/07/25 11:22:53.15 kernel  Spinlock Resource->rbufmgr_spin at address 102c6700 owned by 14a002a
01:95/07/25 11:22:53.15 kernel  End of spinlock display.
    /* Here SQL Server recognizes that the process was holding a spinlock */
01:95/07/25 11:22:53.16 kernel  Begin BackTrace .....
01:95/07/25 11:22:53.17 kernel  pc: 0x6b0368 os_backtrace+0xf8(0x6f20756c, 0x6c6f636b)
01:95/07/25 11:22:53.20 kernel  pc: 0x69f060 ucbacktrace+0xa0(0x2e65d, 0x1)
01:95/07/25 11:22:53.20 kernel  pc: 0x4344cc terminate_process+0x218(0x0, 0xffffffff)
01:95/07/25 11:22:53.20 kernel  pc: 0x6a7884 kaclkintr+0x2ec(0x1c, 0x0)
01:95/07/25 11:22:53.22 kernel  pc: 0xfb16a88 etext+0xf4417f8(0x1056c6e8, 0xfb1bf98)
01:95/07/25 11:22:53.22 kernel  end of stack trace, spid 9, kpid 21626922, suid 1
01:95/07/25 11:22:53.22 kernel  ueshutdown: exiting
    /* Finally, SQL Server shuts down */

Patches for Hard / Transient 605 Errors on HP

Customers running SQL Server under HP-UX 9.04 who have encountered hard and transient 605 errors should call the HP Response Center for the following patches:

· PHKL_3658
· PHKL_3981
· PHKL_4117
· PHKL_4316
· PHKL_4732
· PHKL_4764
· PHKL_4890
· PHKL_5139 <- Includes Rollup of Async I/O patches
· PHKL_5356 <-|
· PHKL_5394
· PHKL_5398 <- LVM and bad block relocation fixes
· PHKL_5532 <-|
· PHKL_5576

Verify with HP to make sure that you need these, but if you have fast/wide SCSI and the A3232A or A3231A arrays, this is at least what you will need to get your system stable.

Localization Problems and cs_ctx_alloc Failure

This article describes common problems when using Open Client(TM) in Windows that can lead to the generic message "cs_ctx_alloc failure," and the steps for correcting them. This is especially important for customers using the internationalization features of Windows itself.

Open Client is based on a set of dynamic libraries, which take a modular approach to messaging and internationalization. One of the most confusing messages for both Technical Support and the customer is the "cs_ctx_alloc failure." This message means that something went wrong while the client libraries were trying to allocate the context for a connection. A number of things can cause this message:

1. Improper setup of the Sybase environment.
2. Missing or corrupted localization files.
3. Resource allocation problems.
4. Actual network problems.

In the first case, the problem can be as simple as failure to run the wsybset.bat batch file before entering Windows. It is important to make sure that the $SYBASE system variable (at a minimum) is set to point to the directory where the Open Client product is installed. The location of DLLs (dynamic link libraries) is also important, since failure to load a DLL may cause a misleading error message like the cs_ctx_alloc error. Make sure that any DLLs (such as network software and Open Client) are located along the $PATH, or in the Windows system directory.

Localization is done during context allocation, so if the localization files are missing or corrupted, the context allocation will fail. This could be due to improper setup of the Sybase environment (described above), an actual installation problem, disk file corruption, or a conflict between the PC's localization and that used by Sybase. The localization routines look in the $SYBASE/LOCALES directory for the file LOCALES.DAT. This file tells the localization routines which files contain which messages for each character set and language. The routines also look for a locale name by searching the environment for the $LC_ALL environment variable. If this environment variable is not defined, they look for the $LANG variable. If neither of those is defined, they look for the Language setting in the WIN.INI file. If none of these are set, they use the locale name "default".
The value thus determined is then searched for in the LOCALES.DAT file to determine which language and character set are associated with it. This seems to be the most common problem, and it is usually solved by adding the following line to either the AUTOEXEC.BAT file or the WSYBSET.BAT file:

set LANG=enu

The right-hand side of the equal sign must be in lower case. In Europe, "set LANG=enu" in WSYBSET.BAT usually does the trick. The "cs_ctx_alloc failure" occurs when the locale name that has been determined does not exist in LOCALES.DAT. See Chapter 2 of the Open Client/Server Supplement for Windows for more information on localization.

The other failures can normally be diagnosed by examining the contents of the SYBINIT.ERR file, located in either your root directory or the directory of the executable that caused the allocation failure. It is very often a problem with the configuration of the network software, which can require modification of that software's initialization files.

End-of-Life Plan for APT and DWB

Sybase supports many different products on many different operating systems and environments. Over time, we review the ongoing investments required to continue supporting each product. We analyze market trends, our customers' requirements as indicated in customer surveys, buying trends, and continued use of Sybase products. We also review our overall product strategy to ensure that we are providing the right mix of products to our customers.

Over the last few years, the trend in the market has been away from character-based development tools such as APT Workbench(TM) and Data Workbench®. We have seen demand for these products diminish as customers turn to graphical tools. As a result of our review and analysis, we have decided to end enhancement and support for Sybase's APT, Data Workbench, and Report-Execute(TM) products. Low demand for these products no longer justifies the investment in new product engineering.

To help Sybase customers continue to support their applications based upon these products and to transition to alternative development environments, Sybase will provide the following end-of-life plan:

1. Sybase will continue to provide full product support for five (5) years for APT, Data Workbench and Report-Execute to customers who have purchased support contracts for these products. Although we will not be enhancing the products to take advantage of new features of SQL Server, we will ensure upward compatibility with future SQL Server releases. On June 30, 2000, we will stop accepting support calls on all APT, Data Workbench and Report-Execute products and will no longer provide any software changes.

2. To our APT Workbench and Data Workbench customers with active support agreements, we are offering a special promotion. You may receive 50 percent off the purchase price of any Sybase SQL Server or Powersoft® product, including PowerBuilder®, up to the value of your APT Workbench and Data Workbench active licenses (based upon current list price). When you no longer require support for your APT, Data Workbench or Report-Execute products, you may request this discount by calling 1-800-685-8225. The Sybase representative will determine your discount amount and apply that amount toward the purchase of other Sybase products. At the time you exercise this discount option, support for the APT and Data Workbench products used in the calculation of the discount will be terminated.
This promotional offer will remain in effect until June 30, 2000, and applies only to APT Workbench and Data Workbench licenses that (a) are covered by active support agreements at the time you take advantage of the promotional offer and (b) were purchased (or deployed against a secondary copy pool) prior to June 30, 1995.

3. We understand that you have applications developed using APT and may need to add licenses as you deploy those applications. You will be able to purchase additional licenses for all APT, Data Workbench and Report-Execute products during this end-of-life period. However, support for these new licenses will end on June 30, 2000, regardless of the date of purchase.

The specific products included in this end-of-support notice are:

· APT Workbench
· APT-Execute(TM)
· Data Workbench (which includes Report Workbench)
· Report-Execute
· PC APT 32-bit Deployment Toolkit
· all APT-Libraries (for a variety of languages)
· APT Translator for Motif (previously obsoleted)
· APT Display for Motif (previously obsoleted)

Note: This end-of-life plan does not affect the Japanese codeline, 5.1.x.

We hope that the extended period of support for these products will help you in the transition from APT, Data Workbench and Report-Execute. If you would like to take advantage of the promotional offer described above, please call your local Sybase support group or 1-800-685-8225 (U.S. and Canada) and a Sybase representative will be glad to help you.

The current releases of the toolset products (APT-Execute, APT Workbench, Data Workbench and Report-Execute), versions 5.2.2 and 5.2.3, have been available for well over a year, and Sybase is in the process of introducing a new release, version 5.3, during Q2 and Q3 of 1995. With the availability of releases 5.2.2, 5.2.3, and 5.3 of the toolset products, and the limited number of customers using the older versions, Sybase can no longer provide support for older versions. Effective June 30, 1995, support has been discontinued for all releases of these products that precede version 5.2.2.

When planning for the current releases of the toolset products, Sybase reviewed the demand for these products on all supported platforms. For a limited number of hardware platforms, demand has declined to fewer than 10 customers. For this reason, Sybase has no plans to provide release 5.2.2, 5.2.3 or 5.3 on the platforms listed below. Support of the last release (4.x and 5.0) of these products provided for these platforms will continue until June 30, 1996, and will not be available after that date. Customers are encouraged to transfer licenses that are covered by current support agreements to a platform for which release 5.3 is provided. Support for release 5.3 will continue until June 30, 2000.

This table shows the combinations of platform and release that will be supported under current contracts until June 30, 1996:

Platform            Operating System   APT/DWB Release
-----------------   ----------------   ---------------
386 PC              UNIX OSF           4.0.3
AT&T 3B2            UNIX 5.3.2         4.0.0
Bull DPX            BOS 1.0            4.0.3
IBM RT              AIX 2.2.1          4.0.0
Stratus XA/2000     VOS 9.0R           4.0.2
Stratus XA/R I860   VOS 11.5           5.0.0

If you have any questions, please call your local Sybase support group or our US corporate offices at (510) 922-3500 or 1-800-8-SYBASE (U.S. and Canada).

Mainframe EBF Information

Here is the latest list of all AVAILABLE rollup Mainframe Access Products EBFs as of 8/23/95. This list replaces any previous list sent.
Be sure to read the important notes at the end, especially note 1:

Table 3: Mainframe Access Products EBFs

                          Catalog   version 2.x   version 3.0
MAP Product               Number    EBF number    EBF number
-----------------------   -------   -----------   -----------
Open Client/CICS/LU6.2    19320     4686          n/a
Open Server/CICS/LU6.2    19300     2218          3528
Open Server/CICS/TCP_IP   19350     n/a           3529
Open Server/IMS/LU6.2     19315     n/a           4983
OSAM/DB2/CICS/LU6.2       19311     1971          4899
OSAM/DB2/CICS/TCP_IP      19360     n/a           4608
OSAM/DB2/IMS/LU6.2        19370     n/a           4609

Important Notes

Please note that these Mainframe EBFs ship automatically as part of any new product order.

1. EBFs 4983 and 4609 allow you to run Open Server(TM) and OSAM/DB2 under native MVS, with no transaction manager. This support is packaged as part of the IMS products, not as a standalone. We recommend that you run the Open Server API product as read-only, unless you wish to code your own commit and rollback logic. The OmniSQL Access Module(TM) for DB2 can be read/write. For installation, it is a separate download file, with different names than IMS, and nothing shared with IMS. You need not actually have IMS to use it, but it must be ordered via the IMS products (19315 and/or 19370).

2. All MAP(TM) EBFs now support RPC parameters >32K but <64K, without needing any of the latest Net-Gateway(TM) EBFs.

3. All MAP EBFs now support RPC parameters >64K if you also have the latest Net-Gateway EBF (3516 or higher, depending on your platform) and additionally start up Net-Gateway with the new -E option to enable this support. The OS/2 GA already has this support built in.

4. Open Server for CICS 1.0 applications can run with the 3.0 stub without having to relink.

5. EBFs 4899, 4608, and 4609 fix failed rollbacks in long-running transactions when a quit or disconnect occurs. All customers doing replication into DB2 will need this EBF. Rollups from prior EBFs cure batched SQL problems (multiple input lines separated by semicolons) encountered when a comma is part of the input line, such as found in names like O'Malley. This behavior is not fixed on the 2.0 codeline on top of 1971 (and there are no plans to fix it), though a one-off based on 2.0 GA was done (4607).

6. All Trinzic (InfoHub) customers need the latest Net-Gateway and OS/CICS EBF.

7. Some Open Client/CICS customers using the latest EBF will need the latest Net-Gateway EBF for their platform if they are using SQL Server names longer than 8 characters with OC/CICS (bug 57967) or are re-linking their applications with the new stub that contains the memory leak fix (bug 56162). Check the Net-Gateway cover letter to be sure the fix has been ported for your Net-Gateway platform.

Net-Gateway EBF Information

Here is the latest list of all rollup Net-Gateway EBFs, which will become available during August and throughout September and early October 1995. This list replaces any previous lists sent:

Table 4: Net-Gateway EBFs

Net-Gateway   Catalog   version 2.x         version 3.0
Platform      Number    EBF number          EBF number
-----------   -------   -----------------   -----------------
HP9000/800    17700     n/a                 5361 (see note 4)
OS/2          16230     5420                4836
RS6000        12670     5419                5401 (see note 5)
Solaris       12950     n/a                 5402 (see note 6)
SunOS         18600     5365 (see note 7)   5400 (see note 7)

Important Notes

1. A version 2.x Net-Gateway can run with a CICS LU6.2 MAP product at the 3.0 or 10.1 version level, but it cannot take advantage of any new functionality. For migration purposes, always update the mainframe code first, then Net-Gateway second.
2. All Trinzic (InfoHub) customers need a 3.0 Net-Gateway EBF plus the latest MAP 3.0 EBF before using the product (that is, GA MAP plus the latest valid mainframe EBF).

3. Some Open Client/CICS customers using the latest EBF 4686 will need a Net-Gateway EBF if they use long SQL Server name lengths (bug 57967) or re-link their applications with the stub fixed for the memory leak problem (bug 56162); check the EBF cover letter to verify that your platform has the fix ported.

4. This EBF is built on HP-SNAplus patch PHNE_5329. All customers are required to install this HP patch in order to use _EBF 5361_.

5. This EBF is built on AIX 3.2.3 and SNA Server. EBF 5401 also contains support for IBM's Password Expiration Manager.

6. This EBF is built on Solaris 2.2 and Sunlink Peer-to-Peer 8.0, with patches 101524-01, 102146-02 and 102147-01 applied.

7. All SunOS customers must apply base patch 100916-06, plus 100916-07, by contacting Sun Engineering.

General Information

The new 3.0 Net-Gateway EBFs include the following features:

· Memory leaks fixed.
· Group password is no longer displayed.
· New -C Net-Gateway start-up option to roll lowercase userids and passwords into uppercase before sending to the mainframe.
· RPCs >32K supported.
· RPCs >64K supported, using the new -E Net-Gateway start-up option and MAP EBF 3525 or higher.

The new 2.0 Net-Gateway EBFs include the following features:

· Group password is no longer displayed.
· New -C Net-Gateway start-up option to roll lowercase userids and passwords into uppercase before sending to the mainframe.
· The sygc gateway control program, broken by previous 2.0 EBFs, is fixed.

Bug 69332: Timeslice Disables Automatic Checkpoint and waitfor Command

This bug is specific to the IBM RS6000. Sybase Technical Support recently encountered a case where a customer's tempdb database was filling up. The customer was able to run a manual dump transaction in tempdb, indicating that there were no open transactions. Further investigation revealed that the automatic checkpoint process was not truncating tempdb's log, even though the process was visible through sp_who. The customer also noticed that the waitfor command no longer worked.

The problem turned out to be a large stored procedure that tended to raise a time slice error. Further testing showed that when any process on the customer's server suffered a time slice error, the waitfor command and automatic checkpoint process no longer functioned until the customer rebooted SQL Server.

If you get a time slice error in your errorlog and subsequently get hung processes, or notice that the waitfor command and automatic checkpoint no longer function, you may call Sybase Technical Support to get a one-off EBF containing the fix for bug 69332. The fix for this bug has not yet been incorporated into a Rollup, but will be in the future.

Q5.7: TECHNICAL NEWS

Volume 5, Number 1
February, 1996
_________________________________________________________________

SYBASE Technical News contains new information about your Sybase software. If needed, duplicate this newsletter and distribute it to others in your organization. All issues of SYBASE Technical News and the troubleshooting guides are included on the AnswerBase CD.
Send comments to technews@sybase.com

To receive this document by regular email, send your name, full Internet address, and customer ID to technews@sybase.com

IN THIS ISSUE

Tech Support News/Features
* 1996 Technical Support North American Holiday Schedule

SQL Server 11.0
* Database "Online" Questions
* Dump Compatibility Issues
* Lock Promotion Changes in SQL Server 11.0
* Changes to Thresholds Behavior in SQL Server 11.0

SQL Server General
* Threshold Process Fails to Fire
* Maximum Timestamp Value
* Dumping Multiple Products to One Tape
* Dump Transaction and Error 4207
* Trace Flags 322 and 323 in 4.9.x & 10.x
* Installing Async I/O on HP-UX 10.0 via SAM
* Ultrix: 4.4 vs. 4.3a Compatibility

Connectivity / Tools / PC
* Latest Rollups for PC Platforms

Certification and Bug Reports
* Latest Rollups for SQL Server

1996 TECHNICAL SUPPORT NORTH AMERICAN HOLIDAY SCHEDULE

Sybase Technical Support is open on all holidays and provides full service on many. During the limited-service holidays shown below, Technical Support will provide the following coverage:

* SupportPlus Preferred and Advantage customers may log all cases; we will work on priority 1 and 2 cases over the holiday.
* 24x7 and 24x5 Support customers may log priority 1 cases; we will work on these over the holiday.
* SupportPlus Standard, Desk Top, and Regular Support customers may purchase Extended-hour Technical Support for coverage over the holiday.

Table 1: Sybase Technical Support limited-service holidays - U.S. customers

Holiday             Date
------------------  ------------
New Year's Day      January 1
President's Day     February 19
Memorial Day        May 27
Independence Day    July 4
Labor Day           September 2
Thanksgiving        November 28
Christmas           December 25

Table 2: Sybase Technical Support limited-service holidays - Canadian customers

Holiday                Date
---------------------  ------------
New Year's Day         January 1
Good Friday            April 5
Victoria Day           May 20
Canada Day             July 1
Labour Day             September 2
Canadian Thanksgiving  October 11
Christmas Day          December 25
Boxing Day             December 26

If you have questions, please contact Technical Support.

DATABASE "ONLINE" QUESTIONS

Sybase SQL Server release 11.0 includes a new online database command. This article answers a few commonly asked questions about the "online" state of a database and the use of the online database command. For more information, please see What's New in Sybase SQL Server Release 11.0?, the System 11 SQL Server Migration Checklist supplement to this Sybase Technical News, and the SQL Server Reference Manual.

Question

Executing a load database leaves the database offline; does load transaction leave a database online or not?

Answer

load transaction leaves the database the way it found it: if it was offline, it remains offline; if it was online, it comes online again. A customer doing a sequence of load database, load tran, load tran ... will have to use the online database command at the end of the load sequence. A customer who has already brought the database online and who then loads another transaction dump will not have to repeat online database.

Question

If a database becomes corrupt during boot-time recovery and I am able to fix whatever caused the problem, can I just bring the database online with the online command, or will I have to reboot SQL Server?

Answer

Using the online database command in this context will bring the database online; you won't have to reboot. Note, however, that if the database has been marked suspect, online database will have no effect.
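To make the first answer concrete, here is a minimal sketch of a full load sequence; the database name pubs2 and the dump file names are illustrative only:

1> load database pubs2 from "/dumps/pubs2.db_dump"
2> go
1> load transaction pubs2 from "/dumps/pubs2.tran1"
2> go
1> load transaction pubs2 from "/dumps/pubs2.tran2"
2> go
1> online database pubs2
2> go

Until the final online database runs, pubs2 stays offline and unavailable to users.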
Question

Can I run dbcc checkalloc or tablealloc with the fix option when a database is offline, instead of having to put the database in single-user mode?

Answer

Yes. You can also run any other dbcc commands in an offline database.

Question

What causes a database to go offline?

Answer

Three things take a database offline:

* load database. This causes a "persistent" offline state; that is, the database stays offline until you issue an online database command.
* load transaction. As mentioned above, if the database was online before the load transaction, it comes back online automatically; if it was offline, it stays offline.
* Database recovery. This causes a "temporary" offline state; that is, the database comes back online automatically when recovery is finished, unless an error occurs during recovery.
_________________________________________________________________
Note

The "persistent" offline state overrides a "temporary" offline state. Thus, if you do a load database followed by a load transaction, the offline state set up by the load database persists after the load transaction. That is the mechanism by which load transaction "leaves the database the way it found it," as in the first question above. That is also how SQL Server detects that a database should remain offline even if the server crashes while the database is in a load sequence.
------------------------------------------------------------------

DUMP COMPATIBILITY ISSUES

Question

What dumps are compatible between SQL Server 11.0 and earlier releases?

Answer

The following table shows which SQL Server releases can read dumps from other SQL Server releases.

                      Destination:
                   || SQL Server | SQL Server | SQL Server | SQL Server
Source:            || 4.9.x      | 10.0.x     | 10.1       | 11.x
=======================================================================
Dump Level 4.9.x   || Yes        | No         | No         | No
-----------------------------------------------------------------------
Dump Level 10.0.x  || No         | Yes        | Yes        | Yes
-----------------------------------------------------------------------
Dump Level 10.1    || No         | No         | Yes        | Yes
-----------------------------------------------------------------------
Dump Level 11.x    || No         | No         | No         | Yes
_________________________________________________________________
Note

This table applies to the compatibility of the physical dumps themselves, rather than to Backup Server compatibility. A Backup Server release is only compatible with the SQL Server of the same or the previous release level.
------------------------------------------------------------------

Here are a few important points regarding dump compatibility between SQL Server releases:

* In general, you should never assume backward compatibility: a lower-numbered version of a product probably can't read a higher-numbered version's files. However, higher-numbered versions should be able to read files from lower-numbered versions. You can dump from 10.0.2 and load to 10.1.
* 10.1 dump headers have a field in them that can't be read by 10.0.x servers. The new "log version" field was added so that System 11 could distinguish a 10.1 release database by reading the dump header. 10.0.x doesn't recognize this field, so 10.0.x can't read 10.1 dumps.
* While SQL Server can read several flavors of log record, it can only write log records at its own release level; and if it writes an 11.0 log record into a section of log that currently contains 10.x records, the new records will not be readable during a subsequent load or boot-time recovery, because the log reader expects all log records to be in a single format until SQL Server tells it to switch formats. Consequently, when you start up SQL Server 11, one of two things will happen:

* If boot-time recovery succeeds, your databases will be online, but SQL Server will refuse to do a logged dump transaction until you do a dump database. This means that until you dump database, you can only do dump transaction with no_log or with truncate_only, because other kinds of dump transaction actually write some log records. truncate_only is allowed because the database is online and therefore writable, and since you aren't trying to dump log records to be read later, it's okay to write an 11.x log record.

* If boot-time recovery fails for some reason, or if the database is replicated (in which case it will be offline only until replication is finished), you can only do dump transaction with no_log or with no_truncate. The database is offline and unavailable for writing, but no_truncate is allowed because it is not a logged operation, and you or Sybase Technical Support (in the case of recovery failure) may find a copy of the log useful. If you are able to bring the database online by executing online database, you will still have to dump database before you can do a logged operation.

In both of these cases, you will see Error 4225:

This database has not been dumped since it was created or upgraded. You must perform a dump database before you can dump its transaction log.

You may also see Error 4226:

Logged DUMP TRANSACTION cannot run in database %.*s, because that database's log version (%d) disagrees with the SQL Server's log version (%d); use DUMP TRANSACTION WITH NO_LOG. Versions will agree once ONLINE DATABASE has run.

To clear this condition, again, just do dump database.

LOCK PROMOTION CHANGES IN SQL SERVER 11

Question

Is lock promotion calculated on a per-statement basis, or on a per-transaction basis? I have a single process executing several thousand update statements against the same table in one transaction, and the EX_PAGE locks are not being promoted to a table-level lock. Is this the behavior I should expect?

Answer

Yes. Lock promotion is performed on a per-statement basis; the transaction is not taken into account. Additionally, there may be some complex statements for which lock promotion will not be performed at all. In System 11, however, you can configure the lock promotion threshold for tables, databases, and servers. This gives you the ability to guarantee that updates use table-level locks. For more information about setting the lock promotion threshold, please see Chapter 11, "Locking on SQL Server," in the SQL Server Performance and Tuning Guide.

CHANGES TO THRESHOLDS BEHAVIOR IN SQL SERVER 11.0

In SQL Server 11.0, there are some important changes in the way that thresholds behave on the log segment.

Thresholds and the syslogshold Table

In the situation where you or a threshold process cannot truncate the log because of an open transaction, SQL Server 11.0 provides a new table, syslogshold, which includes information on which process has the open transaction. You can use the information in this table to set up a threshold process to deal with the open transaction.
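For example, here is a minimal sketch of a query against syslogshold to see the transactions holding up log truncation in a given database; the column list is an assumption based on common usage, so check the SQL Server Reference Supplement for the exact schema:

1> select h.spid, h.starttime, h.name
2> from master..syslogshold h
3> where h.dbid = db_id("pubs2")
4> go

The spid returned can then be matched against sp_who output to identify the offending user.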
For more information on the syslogshold table, please see the SQL Server Reference Supplement. For information about using it in conjunction with threshold procedures, see the chapter "Managing Free Space with Thresholds" in the System Administration Guide.

Private Log Cache Feature

As a performance enhancement, 11.0 introduces the Private Log Cache (PLC). This feature maintains log records in memory, rather than writing them directly to the log. However, SQL Server must guarantee that space is available in the log to flush these records when the time comes. Consequently, the PLC "reserves" log pages; that is, it marks space as used before actually writing to it. Presently, this reservation equals three pages per open transaction per database.

This reserved space counts as "used pages" toward triggering the threshold procedure, even though under some circumstances that space won't actually be written. The effect for you is that on the log segment, thresholds will trigger sooner than it looks like they should: there may appear to be more empty space in the log segment than the threshold claims there is. This occurs because SQL Server reports pages actually written, while it acts on pages reserved. This will be particularly noticeable if your site has many users with open transactions in a single database.
_________________________________________________________________
Note

Remember that the rule is three pages reserved per transaction per database: a thousand users, each with a transaction open in one database, will have reserved 3000 pages in the log for that database.
------------------------------------------------------------------

THRESHOLD PROCESS FAILS TO FIRE

Question

Sometimes a threshold process that dumps the transaction log when the threshold is overrun fails to actually do the dump tran. Why did it fail?

Answer

There are many reasons the threshold procedure can fail to start. SQL Server checks all of the following, in order:

1. That it can allocate memory for the information the threshold procedure will need.
2. That it can create a task to execute the procedure.
3. That the threshold procedure can use the database.
4. That the systhresholds table exists.
5. That there is an entry in systhresholds for the threshold in question.
6. That the threshold owner is a valid user in the database.
7. If the database is usable by the database owner only, that the user who bound the threshold is the database owner.
8. That the threshold owner is a valid server login.
9. That the procedure named in the threshold is valid.

The step that concerns us here is step 3. If your procedure failed, check to be sure that the database is not in single-user mode. If a database is in single-user mode and a user overruns a threshold, the threshold process can't do any work in that database, because the maximum number of users allowed in the database is one, and the user who overran the threshold is that one. The threshold procedure must be in the database before it starts work; if it cannot use the database, SQL Server prints Error 7403 and stops:

Threshold task could not use database %d, and so cannot execute the threshold procedure for segment %d, free space %ld.

This behavior affects all threshold procedures, regardless of whether they do any work that is not directly involved with the database.

MAXIMUM TIMESTAMP VALUE

Question

Is there a maximum value for the timestamp in the log? If there is, what happens when that number is reached?

Answer

Yes, there is a maximum timestamp value.
If that value were ever reached, the next assigned timestamp value would be 0 (not 1). If the timestamp does roll over, corruption might result; however, before this happens, you will be warned. SQL Server attempts to warn you that the database is approaching the maximum timestamp value by checking that the current database timestamp value is less than 0xffff.feffffff. This validation check occurs each time the dbtable is created for the database. If the timestamp is greater than 0xffff.feffffff, SQL Server raises Error 935:

WARNING - the timestamp in database `%.*s' is approaching the maximum allowed.

When you receive Error 935, you should do one of the following as soon as possible:

* Create a new database the same size as the old, execute sp_dboption "select into" for that database, then use select into to re-create the tables.
* Bulk copy your data out, drop and re-create the database, then bulk copy the data back in.

Explanation

Sybase timestamps are unsigned 48-bit numbers; that is, they can hold integer values from 0 through 2**48 - 1, or 281,474,976,710,655. In recovery, SQL Server decides what has and hasn't been done by comparing timestamps: if the timestamp on the page is smaller than the one in the log record, then this change hasn't been made, so SQL Server makes it. If the timestamp were ever to roll over, the timestamp in a log record might well be something like 0xffff.ffffffac, and the timestamp on the page something like 0x0000.00000013. Logically, the timestamp on the page should be later than the one in the log, because the timestamps have wrapped around, but SQL Server won't detect that. All SQL Server knows is that the page timestamp is smaller than the log record timestamp, so it will make the change specified by the log record, thus corrupting the data page.

Bear in mind that the timestamp can take quite a long time to roll over. For instance, suppose that your server makes a change in the database every millisecond (1000 changes per second), all day every day. At that rate, it will take more than 8,900 years for the timestamp to roll over: 2**48 / (1000 * 60 * 60 * 24 * 365.25) = 8919.4.

However, improper use of dbcc rebuild_log or dbcc save_rebuild_log could result in the timestamp reaching the maximum value. When this occurs, SQL Server generates Error 6901:

Overflow on High component of timestamp occurred in database %d. Database table possibly corrupt.

Again, if you receive this error, use one of the methods described above to re-create your database.
_________________________________________________________________
Note

This information applies to SQL Server 11.0 as well as to previous releases.
------------------------------------------------------------------

DUMPING MULTIPLE PRODUCTS TO ONE TAPE

Question

I want to back up all my databases, both Sybase and Oracle, onto the same tape each night in a batch. How do I do this?

Answer

You can't. Backup Server expects a tape to conform to a format similar to the ANSI standard, where the first record on the tape is a "volume header," followed by some "file header" records. Because other vendors do not use the ANSI standard format, Backup Server cannot use tapes that contain other vendors' data. Sybase has no plans to change this in the foreseeable future.

DUMP TRANSACTION AND ERROR 4207

Question

When I try to dump transaction in my database, I get error 4207, which says that dump transaction is not allowed while the select into/bulk copy option is enabled.
The problem is, that option isn't actually enabled in that database. What happened?

Answer

What you're seeing is the old text of error 4207. It caused a lot of confusion, so Sybase changed it. As of SQL Server 11.x, it says:

Dump transaction is not allowed because a non-logged operation was performed on the database. Dump your database or use dump transaction with truncate_only until you can dump your database.

What happened to you is one of several things:

* You performed a fast (unlogged) bulk copy into your database.
* You performed a select into a table in your database.
* You used dump transaction with truncate_only or no_log in your database.

All of these operations make changes that are not recorded in the transaction log. Because the log doesn't have a record of those changes, it isn't possible to rebuild your database from the log alone. Thus, as soon as one of those things happens in a database, SQL Server disallows dump transaction until the next dump database has been performed. The database dump gives the server the necessary information to re-create those changes, so it is then possible to use transaction log dumps to re-create the rest of the changes.

TRACE FLAGS 322 AND 323 IN 4.9.x & 10.x
_________________________________________________________________

Question

I have two 4.9.2 SQL Servers running at different Rollup levels. Certain queries containing subqueries run much more slowly on the newer Rollup than on the older one. The showplan output reveals that the newer Rollup uses a four-step plan where the older one uses only two steps. I have heard that two SQL Server trace flags, 322 and 323, might help me. What do these trace flags do?

Answer

A fix for an optimizer bug, 17230, changed the behavior of the optimizer. The fix ensures that subqueries under ors are not unnested, so that correct results are returned in all cases. The following table lists the platforms and Rollup numbers in which this change first appeared:

Platform                           4.9.2 Rollup Number
--------------------------------   -------------------
SunOS Release 4.x (BSD)            4152
HP 9000 Series 800 HP-UX           4153
IBM RISC System/6000 AIX           4154
AT&T System 3000 UNIX SVR4 MPRAS   4155
Digital OpenVMS VAX                4156
Sun Solaris 2.x                    4157
Digital OpenVMS Alpha 1.5          4158
Digital UNIX                       n/a

If you are running one of these Rollups, or any Rollup released later, you may find that you now get poor performance with certain queries containing subqueries. To fix this, you can try using trace flags 322 or 323. Trace flag 322 is to be used at SQL Server startup; trace flag 323 is the interactive version, invoked with the command dbcc traceon(323). Both trace flags disable the fix for bug 17230.
_________________________________________________________________
WARNING!

Disabling this fix may return performance for these queries to your previous levels, but it may also cause SQL Server to return two sorts of incorrect results:

* Because subqueries are processed as joins, existence checks may return duplicates where they should not.
* A construction such as ... or x in (select a from b where ...) will not work when b is empty.
------------------------------------------------------------------

If you are running SQL Server release 10.x and encountering this particular performance problem, you may use EBFs 4428 and higher, which include these trace flags. These trace flags are not available in SQL Server 11.0, because subquery processing has been substantially rewritten.
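As a minimal sketch of the interactive flag (whether a given trace flag applies server-wide or only to your session varies by flag, so treat the scope as an assumption and weigh the incorrect-results warning above before using this on a production server):

1> dbcc traceon(323)
2> go

(Run the slow subquery here and compare its plan and timing.)

1> dbcc traceoff(323)
2> go

For the startup version, add -T322 to the dataserver command line in your runserver file, in the same way -T3607 was added in the 605 article earlier in this FAQ.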
INSTALLING ASYNC I/O ON HP-UX 10.0 VIA SAM

Release 10.0 of HP-UX allows installation of the asynchronous driver using SAM (System Administration Manager). This removes the need to run the installasync script currently shipped with Sybase SQL Server releases 10.0 and below for HP-UX. To install async I/O with SAM, follow these steps:

1. Invoke SAM and select Kernel Configuration.
2. Select Drivers within the Kernel Configuration menu.
3. Change the Pending State for asyncdsk to In.
4. Rebuild the kernel and reboot the system (using the Actions menu option).
5. Execute the following statements as "root":

   /etc/mknod /dev/async c 101 5
   chmod 0660 /dev/async
   chown sybase /dev/async
   chgrp sybase /dev/async
_________________________________________________________________
Note

Step 5 may also be performed prior to step 1.
-------------------------------------------------------------

In fact, as of System 11, Sybase will no longer ship an installasync script. Instead, use the steps described above. If you have questions about SAM, please call HP Technical Support.

ULTRIX: 4.4 VS. 4.3a COMPATIBILITY

Sybase has determined that Digital Ultrix 4.4 is not binary compatible with Ultrix 4.3a, and we do not recommend that customers run the 4.2 SQL Server under Ultrix 4.4. Sybase has no plans to recompile the SQL Server on Ultrix 4.4; any further issues relating to the compatibility of Ultrix 4.4 should be addressed directly to Digital Equipment Corp.

LATEST ROLLUPS FOR SQL SERVER

The following table lists the latest Rollups for SQL Server on all platforms.

Platform                                 Release Number   Rollup Number
--------------------------------------   --------------   -------------
Sun OS Release 4.x (BSD)                 10.0.2           5539
Sun OS Release 4.x (BSD)                 4.9.2            5631
HP 9000 Series 800 HP-UX                 10.0.2           5540
HP 9000 Series 800 HP-UX                 4.9.2            5632
IBM RISC System/6000 AIX                 10.0.2           5541
IBM RISC System/6000 AIX                 4.9.2            5633
AT&T (NCR) System 3000 UNIX SVR4 MPRAS   10.0.2           5542
AT&T (NCR) System 3000 UNIX SVR4 MPRAS   4.9.2            5634
Digital VAX OpenVMS                      10.0.2           5543
Digital VAX OpenVMS                      4.9.2            5635
Sun Solaris 2.x                          10.0.2           5544
Sun Solaris 2.x                          4.9.2            5636
Digital OpenVMS Alpha 1.5                10.0.2           5545
Digital OpenVMS Alpha 1.5                4.9.2            5637
Digital OSF/1                            10.0.2           5549
Digital OpenVMS Alpha 1.0                4.9.2            5637
Novell Netware                           10.0.2           5547
OS/2                                     10.0.2           4726
Windows NT                               10.0.2           5546

In the case of 4.9.2 Rollups, there are subsequent one-off SWRs (formerly EBFs) that you may order for specific bug fixes, but Sybase recommends that you upgrade rather than order a one-off SWR at this point.

_LATEST ROLLUPS FOR PC PLATFORMS_

The following tables list the latest rollups of Sybase products other than SQL Server for PC platforms.
Table 3: EBFs for DOS Platforms

Product Group                      Release Number   SWR Number
--------------------------------   --------------   ----------
Open Client/C Developers Kit       10.0.2           4907
DB-Library                         4.2.5            4987
DB-Library                         4.2.6            4987
Net-Library FTP PC/TCP             1.0.3            3666
Net-Library Microsoft TCP          1.0.3            5408
Net-Library Named Pipes            1.0.2            2387
Net-Library Novell IPX/SPX         1.0.2            3661
Net-Library Novell LAN Workplace   1.0.3            3665
SQR Workbench                      2.5              2489
SQR Execute                        2.5              2490

Table 4: EBFs for Netware Platforms

Product Group                      Release Number   SWR Number
--------------------------------   --------------   ----------
Open Client/C Developer's Kit      10.0.3           5509
DB-Library                         4.6              2849
Replication Server                 10.1             4922

Table 5: EBFs for OS2 Platforms

Product Group                      Release Number   SWR Number
--------------------------------   --------------   ----------
Open Client/C Developer's Kit      10.0.3           5677
DB-Library                         10.0.2           4146
DB-Library                         4.2              4721
Net-Library Named Pipes            1.0.2            3904
Net-Library Named Pipes            2.0              2698
Net-Library Novell SPX/IPX         1.0.2            3982
Net-Library Novell IPX/SPX         2.0              1612
Net-Library Novell LAN WorkPlace   1.0.2            2534
Net-Library IBM TCP                10.0.1           3184
SQR                                2.4              1822
Open Server                        10.0.2           3905
Open Server                        2.0              3436

Table 6: EBFs for PC Windows Platforms

Product Group                      Release Number   SWR Number
--------------------------------   --------------   ----------
Open Client/C                      10.0.3           5735
DB-Library                         4.2.5            5185
Net-Library FTP PC/TCP             10.0.1           3777
Net-Library Named Pipes            10.0.1           5303
Net-Library NEWT                   1.0.3            3158
Net-Library Novell LAN WorkPlace   1.0.2            2472
Net-Library WinSock                1.0.3            5146
ODBC                               10.0.1           5736
Embedded SQL/C Precompiler         10.0.2           4653
Embedded SQL/Cobol Precompiler     10.0.2           4269
Embedded SQL/C Precompiler         4.0.4            3607
SQL Monitor Client                 10.1.2           4723
SQR Workbench                      2.5              2492
SQL Server Manager                 10.3             5099

Table 7: EBFs for Windows NT Platforms

Product Group                      Release Number   SWR Number
--------------------------------   --------------   ----------
Open Client/C Developer's Kit      10.0.3           5655
Replication Server                 10.1             5112
Open Server                        10.0.3           5513
Manager Server                     10.1             4117
_________________________________________________________________

Disclaimer: No express or implied warranty is made by SYBASE or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. SYBASE and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose. In no event will SYBASE or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

_STAFF_

Principal Editor: Leigh Ann Hussey
Contributing Writers: Lance Andersen, Peter Dorfman, Sekhar Prabhakar, Loretta Vibberts, Elton Wildermuth

Send comments and suggestions to:

SYBASE Technical News
6475 Christie Avenue
Emeryville, CA 94608

or send mail to technews@sybase.com

Copyright 1996 © Sybase, Inc. All Rights Reserved.

Q5.8: TECHNICAL NEWS

Special Supplement
February, 1996
_________________________________________________________________

This supplement to the SYBASE Technical News contains important information for those about to migrate to the System 11 release. If needed, duplicate this supplement and distribute it to others in your organization. These changes are also documented in more detail in the SQL Server Reference Manual released with System 11, in What's New in Sybase SQL Server Release 11.0?, and in the Release Bulletin.
SQL Server 11 Database and Application Migration Checklist

If you plan to upgrade to SQL Server release 11.0, keep in mind the following changes and new features as you prepare your existing installation for the upgrade.

Migration Issues from System 10 and Earlier Releases

* Databases Online / Offline
* New File for Configuration Values
* Deadlock Checking Period
* New page utilization percent Parameter
* Query Processing Changes
* Memory and Cache
* sybsystemprocs Changes
* New and Changed Error Messages
* New Keywords
* Backup Server Changes
* New Output from System Stored Procedures

Migration Issues from Releases Prior to System 10 Only

* Dumps and Loads are Handled Differently
* Remote Connections are Automatic
* Backup Server Must be Started/Stopped
* Backup Server in the Interfaces File
* The null Device Goes Away
* Changes to Dump Scripts
* Loading from Multiple-Tape Devices
* No dump tran to Device After Non-Logged Text Writes
* No Dumps from Within Transaction
* "Run" File Name Change
* New Database for Stored Procedures
* Thresholds
* New Login/Password Protocols
* New create table Permission
* New numeric Datatype
* Display Format Changes
* Online Datatype Hierarchy
* Conversion Changes
* Change to set arithabort/arithignore
* Changes in Subquery Processing
* Change in Comment Protocol
* Change in Permitted Syntax
* No more NULL Column Headings
* More Consistency Required for Correlation Names

Last Minute Checklist

Important Steps After the Upgrade

_MIGRATION ISSUES FROM SYSTEM 10 AND EARLIER RELEASES_

This section applies to you if you are migrating to release 11.0 from SQL Server release 10.0 or any earlier release.

_DATABASES ONLINE / OFFLINE_

Beginning with SQL Server release 11.0, as part of the automatic upgrade mechanism, a database has two states, online and offline. Issuing a load database command takes a database offline. If you have scripts that load databases, you must add an online database command to your script to make the database available again after the load sequence completes. A load sequence consists of a load database execution and a complete set of load transaction statements.

_NEW FILE FOR CONFIGURATION VALUES_

In releases prior to SQL Server release 11.0, SQL Server stored configuration values in the first page of the master device, known as the "config block". Most of these values have now moved to a flat file known as the configuration file. As you upgrade, keep several things in mind regarding this file:

* Backing up the master database does not back up the configuration file. You must either back up your configuration file separately and restore it when you load your database, or else, after loading a master database, issue sp_configure with the restore option and then shut down and restart SQL Server.

* Some of the configuration parameters available through sp_configure have new names. For example, memory is now called total memory. If you run sp_configure memory, value, you will get this error: "Configuration option is not unique." If you have scripts that use sp_configure to set or report on configuration parameters, you will need to change them if the names have changed. You should test all your scripts that use parameter names listed in the table "New configuration parameter names" in Chapter 3, "Changes That May Affect Existing Applications," of What's New in Sybase SQL Server Release 11.0?

* The reconfigure command is no longer required after running sp_configure.
Any of your scripts that include reconfigure will continue to work; the reconfigure command is simply ignored. You may want to consider removing reconfigure commands from your scripts now, however, as the command may not be supported in future releases.

* If you currently set a trace flag in your runserver file, it may now need to be set using sp_configure or by editing the configuration file. For example, if you use a trace flag to print deadlock information to your error log (-T1204 in your runserver file on UNIX), you must remove it from your runserver file and set deadlock information printing with sp_configure or by editing your configuration file. The 1204 trace flag in the runserver file will no longer work. You should check whether other trace flags you are currently using have been converted to configuration parameters and reset them after you upgrade. Some of these trace flags are 1603, 1610, and 1611. Trace flags that are now configuration parameters are listed in Table 3-2, "New configuration parameter names," in Chapter 3, "Changes That May Affect Existing Applications," of What's New in Sybase SQL Server Release 11.0?

* The buildmaster executable no longer supports the -y and -r flags. You must use sp_configure or the configuration file in place of these flags. If you have scripts that use these flags, you must rewrite them using sp_configure, or save and edit copies of the configuration file.

Refer to "Setting Configuration Parameters" in the System Administration Guide for information about the configuration file and sp_configure options.

_DEADLOCK CHECKING PERIOD_

SQL Server release 11.0 introduces a new configuration parameter, deadlock checking period. SQL Server performs deadlock checking after a minimum period of time for any process waiting for a lock to be released. By default, this minimum period is 500 milliseconds, but you can change it with deadlock checking period. If you expect your applications to deadlock infrequently, you can reduce overhead by delaying deadlock checking. However, increasing deadlock checking period causes longer delays before deadlocks are detected. SQL Servers prior to release 11.0 initiated deadlock checks as soon as a task had to wait for a lock. To get the same behavior in release 11.0, set deadlock checking period to 0 in the configuration file.

_NEW PAGE UTILIZATION PERCENT PARAMETER_

A new configuration parameter, page utilization percent, saves time by allocating new extents rather than searching the OAM page chain for unused pages. This makes page allocation faster, but may waste space. The default behavior of previous versions of SQL Server is always to search the OAM page chain for unused pages before allocating a new extent. To keep this behavior in SQL Server release 11.0, set page utilization percent to 100 in the configuration file.

_QUERY PROCESSING CHANGES_

A new strategy known as MRU (most recently used, or fetch-and-discard) is available to the optimizer. This strategy is used in cases where a large query would otherwise overwrite heavily used pages in cache. For example, on SQL Server releases prior to 11.0, users running small transactions often find the necessary pages in cache (avoiding physical I/O). When another user runs select * from a very large table, the cache is overwritten with pages from this table. This forces the other users' queries to do more physical I/O, because their pages are no longer in cache. The MRU strategy is designed to reuse the same buffers in cache and to avoid overwriting all existing pages.
You can execute set showplan on and then run your query to see what plan the optimizer chooses. If you see LRU (least recently used), your SQL Server is running with pre-11.0 behavior; if you see MRU, you are running with the new behavior. If you are unhappy with the MRU plan's performance, you can override it with the lru option in queries, as described in the section on select in the SQL Server Reference Manual. To get pre-11.0 behavior for a query, construct it as follows:

    select * from table (index table prefetch 2 lru)

_SUBQUERY PERFORMANCE CHANGES_

Subquery handling has been improved in SQL Server release 11.0. For most subqueries, SQL Server 11.0 should give you better performance than 10.x. You may see different results in some cases because of bugs that were fixed in SQL Server release 11.0. Test all your subqueries under SQL Server release 11.0 before transferring production databases to the new system.

_________________________________________________________________

Note: If you notice performance degradation after the upgrade, check your data cache and increase it if necessary. SQL Server 11.0 needs more memory than 10.0, and will take memory from the data cache if it doesn't have enough, causing the cache to shrink.
_________________________________________________________________

Subqueries are no longer allowed in updatable cursors.

One type of subquery may be slower in 11.0 than in 10.x: an expression subquery where the outer table is very large and has few duplicate correlation values, the inner table is small, and the subquery contains an aggregate. In this case, the optimizer will not flatten the query to be processed as a join. Such a query might look like this:

    select * from huge_table
    where x = (select sum(a) from tiny_table
               where b = huge_table.y)

To get faster results, you can reformulate the query to mimic the behavior of System 10, as follows:

    select huge_table.y, s = sum(a)
    into #t
    from huge_table, tiny_table
    where b = huge_table.y
    group by huge_table.y

    select huge_table.*
    from huge_table, #t
    where x = #t.s
      and huge_table.y = #t.y

If you have stored procedures, triggers, or views that contain subqueries, they will not automatically be upgraded to take advantage of the new subquery changes. Until you drop and re-create the compiled object, it will continue to behave as it did in earlier releases. Once you drop and re-create it, you will see the new performance and results expected from an 11.0 subquery. If you are upgrading a test system, you may want to rename your old stored procedure, create the new 11.0-style procedure, and run tests to see the differences in performance and results.

_________________________________________________________________

_WARNING!_ After upgrading a production system, drop, re-create, and test the behavior of all compiled objects containing subqueries. If you leave a procedure with the old behavior and then have to drop and re-create it for some reason, you will see the new 11.0 behavior; there is no way to go back to the old behavior once the procedure is dropped and re-created. If the new behavior does not work well for your production system, you are stuck. It is best to test procedures that contain subqueries and make any changes necessary for your application before upgrading your production system. Once the upgrade is complete, drop and re-create all compiled objects containing subqueries.
_________________________________________________________________

To determine which compiled objects contain subqueries and whether they are running at the 11 level or the pre-11 level, use the stored procedure sp_procqmode.
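As a rough sketch only (the exact invocation of sp_procqmode is best checked in the manual; my_proc is a hypothetical procedure name), the check-and-re-create cycle might look like this:

    /* check whether the object still runs in pre-11 query mode */
    sp_procqmode my_proc
    go
    /* drop and re-create it so it picks up 11.0 subquery behavior */
    drop procedure my_proc
    go
    create procedure my_proc as
        select * from huge_table
        where x = (select sum(a) from tiny_table
                   where b = huge_table.y)
    go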
Refer to the SQL Server Reference Manual for details.

The output of set showplan on has changed. If you have applications that rely on the output generated by this command, they may need to be changed.

set dup in subquery No Longer Supported

The set dup in subquery command introduced in SQL Server 10.0 is no longer supported. If you have applications that use it, you will receive a warning message and your subqueries will no longer return duplicates. You may have used this option to obtain better performance. Because of the subquery processing changes introduced in this release, if you really want duplicates, rewrite your query as a join; you should see better performance in SQL Server release 11.0 for these types of queries.

Only 16 Subqueries on One Side of a union

A new restriction has been imposed: you are now allowed only 16 subqueries on one side of a union. This should not affect most customers, because you are only allowed 16 tables within one query. It should affect you only if you have more than 16 subqueries and some of them have no from clauses.

Subqueries and NULL Results

Prior to SQL Server release 11.0, a correlated expression subquery in the set clause of an update returned 0 instead of NULL when there were no matching rows. SQL Server release 11.0 correctly returns NULL when there are no matching rows; if the target column does not permit NULL values, this raises an error. If you have applications that depend on the pre-11.0 behavior, you will need to rewrite them. For example, the following trigger tries to update a column that does not permit NULL values:

    update t1
    set c1 = (select max(c1) from inserted
              where t1.c2 = inserted.c2)

The correct trigger is:

    update t1
    set c1 = (select isnull(max(c1), 0) from inserted
              where t1.c2 = inserted.c2)

The isnull function sets t1.c1 to 0 if the subquery returns no correlation values from the outer table t1.

Memory and Cache

More memory is used for the Sybase kernel and for internal structures, including the new user log cache. You may need to add more total memory to your server to maintain the same performance as on your previous release. Also, compiled objects have grown in SQL Server release 11.0 (and grew considerably in release 10), so you may need to enlarge your procedure cache to maintain the same performance.

sybsystemprocs Changes

Your sybsystemprocs database will need to be larger than it was for the release 10.x SQL Server, to make room for the new system stored procedures in release 11. Check your SQL Server Installation Guide for the correct size, and alter sybsystemprocs before beginning the upgrade. If sybsystemprocs is not the correct size, sybinit will fail. During the upgrade from System 10 to SQL Server release 11.0, all system stored procedures are dropped from sybsystemprocs. If you have customized any of your system stored procedures, you will lose them in this process unless you rename them before the upgrade. If you do not rename them, you will have to re-customize them after the upgrade.

_NEW AND CHANGED ERROR MESSAGES_

Many error messages have been added, and some have had their text changed to improve their intelligibility. If you rely on the text of any error messages within your applications, you should check to be sure they have not changed. You can select * from sysmessages on SQL Server release 11.0 to see the text changes.

_NEW KEYWORDS_

Many new keywords were added in SQL Server release 10, most to support ANSI 89 features. There are also two new keywords added in SQL Server release 11.
* Any database whose name is a new keyword must have its name changed before you upgrade.

* Any object (table, column, and so on) whose name conflicts with a new keyword will yield a syntax error when accessed under a release 11 SQL Server. We recommend that you change such names before the upgrade; you will also have to change any applications that reference those objects. For example:

    select user from table_x

  will yield a syntax error, because "user" is a keyword as of System 10.

SQL Server release 11 includes a stored procedure, sp_checkreswords, which checks for identifier names that are keywords. Use sybinit to run this stored procedure before you start the upgrade itself. Load the Sybase files with sybload, begin a sybinit session (as described in the SQL Server Installation Guide), and select "Check for reserved word conflicts" from the "SQL Server Upgrade" menu. If your existing databases use any of the new keywords as identifiers, sybinit displays the following message, including the number of conflicts it found:

    Warning: x conflicts with 10.0 reserved words were found. Sybase
    suggests that you resolve these conflicts before upgrading the
    SQL Server. Run `sp_checkreswords' on each database for more info.

You must change database names that conflict with reserved words immediately, because the upgrade will fail if database names conflict with reserved words. We also recommend that you change any other object names (tables, columns, and so on) that are reserved words before upgrading. Remember that you will have to change all applications that reference those objects as well. See the SQL Server Reference Manual for information on changing object names.

If you choose not to change other object names, you can use the set quoted_identifier option. You must add the following set command to all your applications and put quotes around all keywords when you issue your T-SQL statements. For example:

    set quoted_identifier on
    select "user" from table_x

_WARNING!_ The sp_checkreswords procedure does not check the contents of stored procedures. For example, if you have a procedure such as the following:

    create procedure x as create table user (a int)

you will receive a syntax error on running the procedure. Test all your stored procedures and triggers to be certain that they use no keywords as identifiers.

sybinit installs sp_checkreswords automatically. If you subsequently want to run reserved word checks, use the following command sequence for each database you want to check:

    % isql -Usa -Ppassword -Sservername
    1> use database_name
    2> go
    1> sp_checkreswords
    2> go

_NEW OUTPUT FROM SYSTEM STORED PROCEDURES_

Many of the existing system stored procedures provide new and improved output. If you currently run them within your applications, you should check what they report under SQL Server release 11.0.

_BACKUP SERVER CHANGES_

The Backup Server has a new feature to deal with tape devices unfamiliar to it. If you dump a database to a tape device that is not one of the devices mentioned in the System Administration Guide Supplement for your platform, and Backup Server cannot determine the device type, the dump command fails. Consequently, when you first dump to a new tape device, you should use the with init option to the dump command. It will take some time for the Backup Server to read from and write to the tape in order to determine how to communicate with it.
When Backup Server finishes this first test, it writes a line to the new tape configuration file, called $SYBASE/backup-tape.cfg by default, which is created during upgrade or installation. You should manage this file as part of the backup strategy for your site.

_MIGRATION ISSUES FROM RELEASES PRIOR TO SYSTEM 10 ONLY_

This section applies specifically to you if you are migrating to SQL Server release 11.0 from a SQL Server release prior to 10.0. You should review both this section and the previous section, ``Migration Issues from System 10 and Earlier Releases''. The features described here were added in System 10 and also exist in SQL Server release 11.0.

_DUMPS AND LOADS ARE HANDLED DIFFERENTLY_

Dumps and loads are now handled by a separate process called the Backup Server. This runs in addition to SQL Server, so you may need to increase memory limits. When upgrading your database, you must install a Backup Server through the install procedure (sybinit), or you will be unable to do dumps and loads on your System 10 or 11 SQL Server. On UNIX platforms, additional small processes called sybmultbuf will be started by the Backup Server to communicate with your database and dump devices. You should run some tests with dump and load to be sure your scripts work and to monitor the additional machine resources needed to do a dump or load on your system. Recovery procedures on system databases have changed with the Backup Server and the sybsystemprocs database. Consult your System Administration Guide and test your procedures for recovery.

_REMOTE CONNECTIONS ARE AUTOMATIC_

Remote connections are now automatically enabled when SQL Server is upgraded or installed. This is necessary so that SQL Server can communicate with Backup Server. However, the "allow remote access" configuration parameter is now dynamic, so you no longer have to reboot SQL Server to change this behavior.

_BACKUP SERVER MUST BE STARTED/STOPPED_

You must now start and stop Backup Server as well as the SQL Server. The install or upgrade program will set up the appropriate scripts to start Backup Server. There is an option in the shutdown command to stop Backup Server:

    shutdown [backup_server] [with {wait|nowait}]

You should establish your own procedures for starting and stopping the Backup Server process as appropriate. If you will be using threshold procedures to dump transaction logs to a device, you should always have Backup Server running.

_BACKUP SERVER IN THE INTERFACES FILE_

You must now maintain entries in your interfaces files for the Backup Servers you use. The install or upgrade program will make the necessary entry for your local machine, but you may need to determine whether those entries need to be distributed to other copies of your interfaces file.

_THE NULL DEVICE GOES AWAY_

The device /dev/null on UNIX, or NL on VMS, is no longer available. sybinit will remove the default entries for these devices from the sysdevices table. Check any scripts that do dumps to be sure you are not using one of these default devices or any other dump device that points to /dev/null or NL. If you do not change these scripts, your dumps will fail with the following error:

    Backup Server: 4.56.2.1: Device validation error: couldn't obtain
    tape drive characteristics for device /dev/null, error: Invalid
    argument

_CHANGES TO DUMP SCRIPTS_

Backup facilities have changed as of SQL Server release 10.0. This section describes the minimum changes you must make to existing dump scripts.
The single feature that can most affect your current use of tapes and dump commands is Backup Server's ability to make multiple dumps to a single tape: a new dump is placed after the last existing file on the current tape volume. Here are some guidelines for backing up your databases immediately after upgrading:

* If you use new tapes, or tapes without ANSI labels, your pre-10.0 dump scripts overwrite the entire tape.

* If you use single-file media (for example, quarter-inch cartridge) with ANSI labels, and the expiration dates on the tapes have passed, pre-10.0 dump scripts will overwrite the tapes.

* If the expiration date on a single-file tape has not been reached, you will be asked to confirm the overwrite: a positive response overwrites the existing tape; a negative response initiates a request for a volume change, and the tests are repeated on the new volume.

* If you use multi-file tape media and do not change your dump scripts, the dump will be appended to the existing files on the tape.

* If you want to overwrite existing tapes that have ANSI labels, you must append the with init clause to existing dump commands:

    dump database mydb to datadump1 with init

  You can also use operating system commands to erase or truncate the tape.

_LOADING FROM MULTIPLE-TAPE DEVICES_

A second connection to SQL Server is now required when loading a database from a tape device that spans multiple tapes. This allows you to issue the sp_volchanged stored procedure (which notifies Backup Server that the tape operator has finished handling a volume change, such as mounting a new tape). If you execute the command load database master from a tape device that requires a volume change, you will need a second running SQL Server to issue the sp_volchanged stored procedure, because SQL Server must be in single-user mode to load master, so you cannot log in a second time to send the volume change request. For this reason, Sybase Technical Support strongly recommends that you ensure your tape device has enough space to hold the full dump of master before you dump the master database, or else that you use a disk device for your master backups.

_CHANGES TO RENAMING DATABASES_

If any table in the database references - or is referenced by - a table in another database, sp_renamedb cannot rename the database. It produces the following error message:

    Database `database_name' has references to other databases.
    Drop those references and try again.

Execute the following query to determine which tables and external databases have foreign key constraints on primary key tables in the current database:

    select object_name(tableid), db_name(frgndbid)
    from sysreferences
    where frgndbname is not null

Execute the following query to determine which tables and external databases have primary key constraints for foreign key tables in the current database:

    select object_name(reftabid), db_name(pmrydbid)
    from sysreferences
    where pmrydbname is not null

Before renaming the database, you must use alter table to drop the cross-database constraints in these tables. See sp_renamedb in the SQL Server Reference Manual for more information about renaming databases.

No dump tran to Device After Non-Logged Text Writes

dump tran to a device is no longer allowed after a non-logged text operation. If you use both on the same database, you must change your text writes to be logged, or you will be unable to use dump tran as part of your backup scheme.
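For illustration only, a logged text write looks roughly like this; the with log clause is what makes the write logged. The blurbs table and its copy text column are assumed here from Sybase's pubs2 sample database:

    /* fetch a text pointer, then perform a logged text write */
    declare @ptr varbinary(16)
    select @ptr = textptr(copy)
    from blurbs
    where au_id = "486-29-1786"
    writetext blurbs.copy @ptr with log "New, fully logged blurb text."
    go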
For details on changing your text writes, consult the SQL Server Reference Manual (the writetext command), and either the DB-Library Reference Manual (dbwritetext(), and so on) or the Open Client Client-Library Reference Manual (ct_send_data(), and so on), as needed.

_NO DUMPS FROM WITHIN TRANSACTION_

dump database and dump transaction are no longer allowed within a user-defined transaction.

_"RUN" FILE NAME CHANGE_

On UNIX platforms, the default file that startserver looks for to start your server is now called RUN_SYBASE rather than RUNSERVER. If you have a file called RUNSERVER in your install directory, sybinit will change its name to RUN_SYBASE during the upgrade. If you use the startserver -fRUNSERVER command to start your server, you must change it to startserver -fRUN_SYBASE. If you start your server automatically when you boot your machine, make sure you change your system startup file if necessary.

_NEW DATABASE FOR STORED PROCEDURES_

System stored procedures are stored in a new database called sybsystemprocs. Find space for it on an existing device, or find a new physical device to give to sybinit, before you upgrade. You may also need to modify your dbcc, backup, and recovery procedures to include this database. If you have your own system stored procedures, you may want to move them to the new database, although it is not required. If you do decide to move them, you must add the database name to any tables you reference that exist in master. You also need to include a check such as:

    if @@trancount > 0
    begin
        print "Can't run this procedure from within a transaction"
        return 1
    end

In addition, you should not make any changes to master database tables within a transaction. Doing so can cause recovery problems on the master database.

_THRESHOLDS_

The threshold manager is a new tool for managing space in segments, particularly the log segment, but you need to decide how to use it before you upgrade. In particular, you should deal with the "last chance threshold" on the log before you upgrade. The default behavior is to suspend all users when the last chance threshold is reached; if you hit the last chance threshold, all users will simply hang until some action is taken, instead of getting the 1105 error. Decide whether that is the behavior you want, and test a last chance threshold procedure before you upgrade your production databases. You must use the sp_thresholdaction stored procedure to define an action, such as dump transaction, to take when the last chance threshold is reached. If you do not, then when you run out of space in your log, all users will hang indefinitely. Below is a sample threshold procedure that you can tune to suit your installation. It dumps the transaction log to a tape device when the last chance threshold is reached:

    create procedure sp_thresholdaction
        @dbname      varchar(30),
        @segmentname varchar(30),
        @space_left  int,
        @status      int
    as
        dump transaction @dbname to "/dev/rmt4"

Remember, even with a last chance threshold procedure to dump the log, you will have a problem if an open transaction is filling the log. The dump transaction command cannot clear the log because of the open transaction, and the user who holds the open transaction will be in the suspended state and so will keep the transaction open. If this occurs, you can use the lct_admin function documented in the "System Functions" section of the SQL Server Reference Manual. That document also describes how to set thresholds with sp_addthreshold.
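As a hedged sketch (the database name and free-page count here are illustrative; check sp_addthreshold in the Reference Manual for the exact argument list), adding a threshold that fires a procedure when log space runs low might look like:

    /* mydb and the 2048-page trigger point are assumptions */
    use mydb
    go
    sp_addthreshold mydb, logsegment, 2048, sp_thresholdaction
    go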
You will know that users are hung and have reached the last chance threshold because sp_who shows a status of "LOG SUSPEND." In addition, SQL Server writes messages to the error log listing how many tasks are sleeping because the log is full. You can change the last chance threshold behavior for a database back to the old behavior of aborting the transaction with an 1105 error by setting "abort xact when log is full" to true with sp_dboption.

Tip for Bulk Copy Users

Bulk copy handles input records in "batches", which are transactions. This is true whether or not the bcp inserts are logged. When loading a large number of records, use the bcp ... in ... -b records batch option, and set the number of records to some reasonable value. This reduces the chance that a bcp command will hold a long transaction and block dump transaction from clearing enough log space.

_NEW LOGIN/PASSWORD PROTOCOLS_

New logins and password changes require a minimum 6-byte password. Existing passwords of fewer than 6 bytes are left alone during the upgrade, but new password changes will enforce the 6-byte minimum.

_NEW CREATE TABLE PERMISSION_

Beginning with the System 10 SQL Server, create table permission is explicitly granted to all users on tempdb. This permission is granted at server startup time.

_NEW NUMERIC DATATYPE_

A new numeric datatype is available in the System 10 SQL Server. Unlike float, it is platform independent and exact. If you give the System 10 SQL Server a constant such as:

    5.1

it will be assigned the numeric datatype rather than float. If you want float, you must represent your constant as:

    5.1e0

The following mathematical functions now return a value of type numeric rather than float:

* abs
* ceiling
* degrees
* floor
* power
* radians
* round
* sign

If you are running a pre-System 10 front end with a System 10 SQL Server, numeric datatype values will be mapped to float on the front end.

_DISPLAY FORMAT CHANGES_

The isql display format for approximate-numeric datatypes now displays additional digits of precision. Maximum units of precision for storage and display are machine dependent. real values now display up to 9 digits of precision; float values, up to 17 digits. Values are rounded to the last digit on display. Previously, only six places to the right of the decimal point were displayed. All values requiring more digits than the maximum are displayed in scientific notation; that is, a float value "1e18" is displayed as such, rather than as 1000000000000000000.000000. Note that the new exact numeric types, decimal and numeric, display the entire number.

_ONLINE DATATYPE HIERARCHY_

You can get the datatype hierarchy for SQL Server release 11.0 by running the following query:

    select name, hierarchy from systypes order by hierarchy

    name                         hierarchy
    ---------------------------- ---------
    floatn                               1
    float                                2
    datetimn                             3
    datetime                             4
    real                                 5
    numericn                             6
    numeric                              7
    decimaln                             8
    decimal                              9
    moneyn                              10
    money                               11
    smallmoney                          12
    smalldatetime                       13
    intn                                14
    int                                 15
    smallint                            16
    tinyint                             17
    bit                                 18
    varchar                             19
    sysname                             19
    nvarchar                            19
    char                                20
    nchar                               20
    varbinary                           21
    timestamp                           21
    binary                              22
    text                                23
    image                               24

    (28 rows affected)

In pre-System 10 SQL Servers, money was above float in the hierarchy. It is now below both float and numeric. The following query:

    select $12*8.9

returns a result of type numeric; in pre-System 10 SQL Servers it returned money. Likewise, the following query:

    select $12*8.9e0

returns a result of type float; in pre-System 10 SQL Servers it returned money.
If you want the pre-System 10 behavior, you must use convert:

    select $12*convert(money, 8.9e0)

_CONVERSION CHANGES_

As of System 10, all conversions to character types succeed only if no decimal digits are lost. In previous versions of the server, floating point-to-character conversions allowed some truncation without warning.

Change in Money Conversion

All conversions to money datatypes round to four places. When an explicit conversion of one numeric value to another results in a loss of scale, the result is truncated without warning. For example, explicitly converting a float to an integer causes SQL Server to truncate all values to the right of the decimal point.

Change in Integer-to-Character Conversion

Conversions from integer to character now return an error if an overflow occurs. They formerly returned a buffer of "*".

Change to set arithabort/arithignore

The set arithabort and set arithignore commands have changed behavior in some cases. If you use these set commands, you should test and understand the new behavior.

_CHANGES IN SUBQUERY PROCESSING_

Changes were made in subquery processing to support full ANSI compatibility and to fix some bugs in the SQL Server. These changes may produce different results, as well as performance differences. They are detailed below; you should test the subqueries used in your applications to understand the new behavior.

Changes to Subqueries Using IN/ANY

In pre-System 10 SQL Servers, subqueries using in or any would return duplicates if the values being selected contained duplicates. For example, using the pubs database:

    select pub_name from publishers
    where pub_id in (select pub_id from titles)

In pre-System 10 SQL Servers this query returned:

    pub_name
    -------------------
    New Age Books
    New Age Books
    New Age Books
    New Age Books
    New Age Books
    Binnet & Hardley
    Binnet & Hardley
    Binnet & Hardley
    Binnet & Hardley
    Binnet & Hardley
    Binnet & Hardley
    Binnet & Hardley
    Algodata Infosystems
    Algodata Infosystems
    Algodata Infosystems
    Algodata Infosystems
    Algodata Infosystems
    Algodata Infosystems

As of System 10, this query returns:

    pub_name
    -------------------
    New Age Books
    Binnet & Hardley
    Algodata Infosystems

Change in Evaluation of not in

ANSI states that if a subquery returns a NULL, a not in should evaluate to UNKNOWN or FALSE. Here is an example, using the pubs database:

    select pub_id from publishers
    where $100.00 not in
        (select price from titles
         where titles.pub_id = publishers.pub_id)

In pre-System 10 SQL Servers, this query returns:

    pub_id
    --------
    0736
    0877
    1389

In the System 10 SQL Server, this query returns:

    pub_id
    --------
    0736

Change in Results of Subquery with or...exists/in/any

The pre-System 10 SQL Server returned the wrong results when an exists, in, or any subquery appeared under an or. Given the following tables and contents:

    full_table:  x    y      empty_table:  z
                ---- ----                 -----
                 5    2

with the following example query:

    select x from full_table
    where y in (select z from empty_table)
       or y = 2

In pre-System 10 SQL Servers, this query returns:

    x
    -----

    no rows returned

As of System 10, this query returns:

    x
    -----
    5

Change in Evaluation of >ALL and <ALL

ANSI states that >ALL and <ALL should be TRUE when a subquery returns no rows.
Here is an example against the pubs database:

    select title from titles
    where advance > all
        (select advance from publishers, titles
         where titles.pub_id = publishers.pub_id
         and pub_name = "No Such Publisher")

In pre-System 10 SQL Servers, this query returns:

    title
    ---------------------------

    no rows returned

As of System 10, this query returns all rows in the titles table:

    title
    ---------------------------
    But Is It User Friendly?
    Computer Phobic and Non-Phobic Individuals: Behavior Variations
    Cooking with Computers: Surreptitious Balance Sheets
    Emotional Security: A New Algorithm
    Fifty Years in Buckingham Palace Kitchens
    Is Anger the Enemy?
    Life Without Fear
    Net Etiquette
    Onions, Leeks, and Garlic: Cooking Secrets of the Mediterranean
    Prolonged Data Deprivation: Four Case Studies
    Secrets of Silicon Valley
    Silicon Valley Gastronomic Treats
    Straight Talk About Computers
    Sushi, Anyone?
    The Busy Executive's Database Guide
    The Gourmet Microwave
    The Psychology of Computer Cooking
    You Can Combat Computer Stress!

Change in Evaluation of Aggregates with exists

In pre-System 10 SQL Servers, queries that have both aggregates and exists subqueries sometimes returned the wrong answer. This happens for correlated subqueries where there are duplicates in the subquery. Here is an example from the pubs database:

    select count(*) from publishers
    where exists
        (select * from titles
         where titles.pub_id = publishers.pub_id)

In pre-System 10 SQL Servers, this query returns 18; as of System 10, it returns 3.

Change in Evaluation of in Subqueries with select distinct

Prior to System 10, correlated in subqueries using distinct would cause the outer query to return no rows. Here is an example from the pubs database:

    select pub_name from publishers
    where pub_id in
        (select distinct pub_id from titles
         where titles.pub_id = publishers.pub_id)

In pre-System 10 SQL Servers, this query returns:

    pub_name
    ---------------------------

    no rows returned

In the System 10 SQL Server, this query returns:

    pub_name
    ---------------------------
    New Age Books
    Binnet & Hardley
    Algodata Infosystems

Change in Evaluation of between

The ANSI standard states that a between predicate of the form:

    expr1 between expr2 and expr3

is equivalent to:

    expr1 >= expr2 and expr1 <= expr3

The pre-System 10 SQL Server switched expr2 and expr3 automatically if it knew at compile time that expr2 > expr3. As of System 10, SQL Server no longer does that switch. For example:

    create table demo (id int)
    insert into demo values (250)
    select id from demo where id between 400 and 200

In pre-System 10 SQL Servers, this query returns:

    id
    ---------------------------
    250

As of System 10, this query returns:

    id
    ---------------------------

    no rows returned

_CHANGE IN COMMENT PROTOCOL_

ANSI comments start with two or more consecutive hyphens (--) and are terminated by a <newline>. ANSI comments coexist with Transact-SQL® comments as of the System 10 SQL Server. Certain mathematical expressions will return different results in the System 10 SQL Server because of the support for ANSI comments. For example:

    select 5--2

In pre-System 10 Servers, this query returns 7. As of System 10, it returns 5, because --2 is considered a comment.
You can use the following query to get back the value 7 under the System 10 SQL Server:

    select 5-(-2)

Alternatively, you can write the query with extra spaces:

    select 5 - -2

_CHANGE IN PERMITTED SYNTAX_

In pre-System 10 SQL Servers, the following syntax was allowed:

    select * from table1, table1
    where clause

ANSI states that this syntax is invalid, so this query returns a syntax error in System 10. The correct syntax in System 10 is:

    select * from table1 t1, table1 t2
    where clause

The following update statement also returns a syntax error in System 10:

    update table1 set a = a + 1
    from table1
    where b = 5

The correct syntax is:

    update table1 set a = a + 1
    from table1 t1
    where t1.b = 5

_NO MORE NULL COLUMN HEADINGS_

Previous SQL Server releases allowed NULL column headings in tables created using select into. As of System 10, you must provide a column heading that is a valid SQL identifier. Check all applications that use select into. Examples of select list items that require column headings are:

* An aggregate function, such as avg(advance)
* An arithmetic expression, such as colname * 2
* String concatenation, such as au_lname + ", " + au_fname
* A built-in function, such as substring(au_lname,1,5)
* A constant, such as "Result"

There are three ways to specify column headings:

    select title_id, avg_advance = avg(advance)
    into #tempdata from titles

    select title_id, avg(advance) avg_advance
    into #tempdata from titles

    select title_id, avg(advance) as avg_advance
    into #tempdata from titles

_MORE CONSISTENCY REQUIRED FOR CORRELATION NAMES_

In pre-System 10 releases, statements that specified correlation names but did not use them consistently still returned results. The following statement now returns an error, in accordance with ANSI:

    select title_id from titles t
    where titles.type = "trad_cook"

The correct query is:

    select title_id from titles t
    where t.type = "trad_cook"

When a subquery includes this kind of correlation, no error is reported, but the query may return different results than on a pre-System 10 Server:

    select * from mytable
    where columnA =
        (select min(columnB) from mytable m
         where mytable.columnC = 10)

In pre-System 10 releases, mytable.columnC in the above subquery referred to the mytable in the subquery. In System 10, mytable.columnC refers to the outer table mytable. If the query needs to refer to mytable in the subquery, construct it as follows:

    select * from mytable
    where columnA =
        (select min(columnB) from mytable m
         where m.columnC = 10)

_LAST MINUTE CHECKLIST_

The information in this section applies to upgrades from 10.x and earlier releases of SQL Server. Perform the tasks listed below to avoid known causes of failure.

Read the Documentation

Read the following documents for important information about your upgrade:

* What's New in Sybase SQL Server Release 11.0?
* Release Bulletin
* SQL Server installation and configuration guide

If you are upgrading from a pre-10.x SQL Server, the installation and upgrade utility, sybinit, will not be familiar to you. It is explained in the SQL Server installation and configuration guide.

Back Up All Databases

Back up all databases in case of upgrade failure. A failed upgrade may corrupt your databases, so complete backups are essential.

Mirror and Unmirror

In addition to your backups, we recommend mirroring SQL Server devices if your environment permits. Restoring your databases from the mirror is much quicker than restoring from backups.
Use the following syntax:

    disk mirror
        name = "device_name",
        mirror = "physical_name"

Then, unmirror all Sybase mirrors before upgrading. Use the following syntax:

    disk unmirror
        name = "device_name",
        mode = remove

See the SQL Server System Administration Guide for instructions on mirroring. See the SQL Server Reference Manual for information about the disk mirror, disk unmirror, and disk remirror commands.

Increase SQL Server Memory

Various changes in SQL Server 11.0 have increased its memory requirements. These changes include:

* The new user log cache
* A larger dataserver binary
* Previously existing structures that are now larger (such as locks)

We recommend the following approach to adjusting your memory for the upgrade:

1. Before making any upgrade-related changes, look at the error log to get a profile of the memory usage on your production SQL Server. Look for messages that give the buffer cache, procedure cache, and procedure header sizes. Record these values to use after the upgrade.

2. Next, to ensure that you have enough memory to successfully complete the upgrade to 11.0, use sp_configure to change the following configuration parameters to their default values:

    Configuration Parameter    Default Value
    -----------------------    ---------------------
    user connections           25
    locks                      500
    open objects               5000
    memory                     at least 7680 (15 MB)

   Use commands like the following to change the parameters:

    sp_configure "user connections", 25

3. After making these changes to maximize available memory, check the error log again and record how much cache you have now. You will use this information to configure your cache after the upgrade.

4. If you are upgrading from a pre-10.x SQL Server and your user databases have a lot of stored procedures, triggers, views, or rules, you may need to increase the stack size, because remapping stored procedures during the upgrade requires extra stack space. Do the following:

   + Record your current stack size.
   + Increase the stack size. For example, to increase it to 200K, enter:

        sp_configure 'stack size', 204800

5. After the upgrade, perform the tasks described in ``Important Steps After the Upgrade'' below to return your memory to its normal configuration.

See "How SQL Server Uses Memory" in the SQL Server Troubleshooting Guide for a discussion of tools for assessing memory usage. See the System Administration Guide and the Performance and Tuning Guide for more information about SQL Server memory usage.

_ADDITIONAL PRE-UPGRADE TASKS_

Perform the following additional tasks before beginning your upgrade:

* If you are upgrading from a pre-10.x SQL Server, check that you have extra space in all your databases for stored procedures and other objects. Compiled objects, such as stored procedures, triggers, and rules, will be remapped and will take up more disk space than previously.

* If you are upgrading from release 10.x, disable replication. If replication is enabled and your log has not been cleared, the upgrade will fail. The Release Bulletin details the steps necessary to disable replication. See also the Replication Server Commands Reference.

* Run the following dbccs on all databases: checkdb, checkalloc, and checkcatalog. See the SQL Server Reference Manual for instructions on running dbccs.

* Truncate the transaction logs on all databases before running sybinit. This is a precaution against failure.

* Verify that you have enough space for system tables. If you are upgrading from SQL Server 10.x, slightly more disk space is required for the creation of new system tables.
Use the "Test upgrade eligibility" option in sybinit to check that you have enough disk space. Follow the instructions in the SQL Server installation and configuration guide. * Turn off all options on your all databases, with the exception of tempdb, using sp_dboption. The upgrade requires that you set select into/bulk copy to true for tempdb. To turn off an option, use this syntax: sp_dboption database_name, "option", false To set select into/bulk copy for tempdb, use the following command: sp_dboption tempdb, "select into/bulkcopy", true See the SQL Server Reference Manual for information on sp_dboption. * If you are upgrading from a pre-10.x SQL Server, check that you have enough devices configured to create the sybsystemprocs device. For example, if you have 10 devices configured, and all of them are already in use, increase devices by 1 as follows: sp_configure devices, 11 See the SQL Server Reference Manual for information on sp_configure. * Be sure SQL Server is running and all users are off before beginning the upgrade. _IMPORTANT STEPS AFTER THE UPGRADE_ After performing the upgrade according to instructions in the SQL Server Installation and Configuration Guide, be sure to perform all the tasks in the following sections. Reconfigure Memory for Normal Usage Follow these steps to return your memory parameters to their normal configurations. Use the record of memory usage you created in the section "Increase SQL Server Memory". 1. Check the error log to see how much memory is left in cache. By subtracting this number from the amount you recorded in Step 3 of ``Increase SQL Server Memory'', you can determine how much additional memory is required by the 11.x SQL Server. 2. Using the baseline information you obtained in Step 1 of ``Increase SQL Server Memory'', reconfigure the memory parameter as needed to support your usual number of user connections, open objects, and locks. 3. Use the sp_configure command to reconfigure the resources you changed in Step 2 back to the usual values needed for your production environment. 4. Reset your stack size to the size you recorded in Step 4 of ``Increase SQL Server Memory''. Perform Backup Tasks Perform the following tasks to back up your databases: * Install Backup Server. If you upgraded from 10.x and you want to keep the same port number for your 11.0 Backup Server, shut down the 10.x Backup Server before you start the 11.0 Backup Server. See the SQL Server installation and configuration guide for complete instructions on installing Backup Server. * After you install Backup Server, verify that SQL Server's default Backup Server name matches the name you gave Backup Server when you installed it. If you performed an upgrade from a pre-10.0 SQL Server, you must be especially careful that SQL Server knows the name you gave Backup Server. SQL Server needs the name in order to perform dumps and loads. The upgrade program automatically sets SQL Server's default Backup Server to SYB_BACKUP. However, you may have given your Backup Server a different name when you installed it. If so, use sybinit to reconfigure SQL Server and enter the correct Backup Server name as the default. Follow the instructions in the SQL Server Installation and Configuration Guide for installing a Backup Server and configuring an existing SQL Server. * Backup all databases immediately so that your dumps are at the right release level. If necessary, you can load 10.x dumps to a release 11.0 SQL Server, but you cannot load 4.9.x or 4.2 dumps. 
_________________________________________________________________

Disclaimer: No express or implied warranty is made by Sybase or its subsidiaries with regard to any recommendations or information presented in SYBASE Technical News. Sybase and its subsidiaries hereby disclaim any and all such warranties, including without limitation any implied warranty of merchantability or fitness for a particular purpose. In no event will Sybase or its subsidiaries be liable for damages of any kind resulting from use of any recommendations or information provided herein, including without limitation loss of profits, loss or inaccuracy of data, or indirect, special, incidental or consequential damages. Each user assumes the entire risk of acting on or utilizing any item herein, including the entire cost of all necessary remedies.

_STAFF_

Principal Editor: Leigh Ann Hussey
Contributing Writers: Kathy Saunders, Cris Gutierrez

Send comments and suggestions to:

    SYBASE Technical News
    6475 Christie Avenue
    Emeryville, CA 94608

or send mail to technews@sybase.com

Copyright 1996 © Sybase, Inc. All Rights Reserved.

_________________________________________________________________
Q7.1: SYBASE IQ
_________________________________________________________________

Sybase IQ isn't meant as just an indexing scheme, per se. It is meant as a means of providing a low-cost data warehousing solution for unplanned queries. By the way, Sybase IQ does not use bitmapped indexes; it uses bitwise indexes, which are quite different.

In data warehousing, MIS generally does not know what the queries are. That also means that the end users often don't know what the queries are. Since nobody knows what the queries are, turning end users loose on a 500GB operational database to perform huge queries could prove to be unacceptable (it may slow the system to a crawl). So, many customers are resorting to separating their operational (OLTP) databases from their data warehousing databases. With this separation, the operational database can continue about its business, and the data warehouse users can issue blind queries without affecting the operational systems.

Realize that operational systems may handle anywhere from hundreds to a few thousand users and, more likely than not, require data that is highly accurate. However, data warehouse users often don't require up-to-the-second information, can often wait several hours, 24 hours, or even days for the most current snapshot, and generally don't require updates to be made to the data. So, Sybase IQ can be updated a few times a day, once a day, or a few times a week.

Realize that Sybase IQ is _strictly_ a data warehousing solution. It is not meant for OLTP systems.

Sybase IQ can also sit on top of Sybase SQL Server:

    [end user]
        |
        |
    [Sybase IQ]   [Sybase SQL Server]

What happens in this environment is that a data warehouse user connects to Sybase IQ. Sybase IQ then takes care of processing the query, or forwards the query to SQL Server if it determines that the access paths in SQL Server are faster. An example of where SQL Server will be faster than Sybase IQ is when SQL Server can achieve query coverage with the indexes built in SQL Server. The obvious question is: why not index every column in SQL Server? Because it would be prohibitively expensive to update any of the data. Hence Sybase IQ, where all the columns make use of the bitwise index scheme. By the way, you can choose which columns will be part of an IQ implementation; you may choose to have only 30% of your columns as part of your Sybase IQ implementation.

Again, I can't stress enough that Sybase IQ is _strictly_ for data warehousing solutions, not OLTP solutions.

_________________________________________________________________

Q7.2: NET BOOK REVIEW
_________________________________________________________________

* Sybase
* Sybase Architecture and Administration
* Developing Sybase Applications
* Sybase Developer's Guide
* Sybase DBA Survival Guide
* Guide to SQL Server
* Client/Server Development with Sybase
* Physical Database Design for Sybase SQL Server
* High Performance Relational Database Design

Sybase - McGoveran and Date
ISBN: 0-201-55710-X
Published by Addison-Wesley.
450 pages.

I think that once, not too long ago, this used to be the only book on Sybase available. Now it seems to be totally out of print! It covered versions of Sybase SQL Server up to 4.8, and covered a number of aspects of Sybase, including APT.

Sybase Architecture and Administration - Kirkwood
ISBN: 0-13-100330-5
Published by Ellis Horwood.
404 pages.

This is a good book covering Sybase systems up to and including System 10. It deals to a good depth with the architecture and how most of the functions, such as the optimiser, work. It explains in a readable style how devices work, and how indexes are stored and manipulated.

Developing Sybase Applications - Worden
ISBN: 0-672-30700-6
Published by SAMS.
??? pages. (Inc CD.)

This book seems very similar to number 4 (Sybase Developer's Guide, by the same author), so I have not bought it. I have browsed through it several times in the book shop, and decided that his other book covers a good deal of this. There are chapters on Visual Basic and PowerBuilder.

Sybase Developer's Guide - Worden
ISBN: 0-672-30467-8
Published by SAMS.
698 pages. (Inc disk.)

This is a big book that does not, in my opinion, cover very much. In fact, the disk that is included contains DBATools, and that seems to sum up the first 50% of the book. There is a fair amount of coverage of the general architecture and how to install Sybase. Transact-SQL, cursors, and stored procedures get a fair covering, as does using C/C++ with DB-Library. (I can find no mention of CT-Library.) Unfortunately, quite a lot of the book covers general issues in insufficient depth to be useful; they just seem to be there to give the book bulk. Maybe as a developer's guide, his other book would be a better buy. This would probably be most useful to a small company implementing a Sybase database.

Sybase DBA Survival Guide - Jeff Garbus, David Solomon, Brian Tretter
ISBN: 0-672-30651-4
Published by SAMS.
506 pages. (Inc disk.)

This book is good, and is a great help in a crisis. It includes lots of useful ideas and strategies for most (if not all) of the DBA tasks. It covers Sybase SQL Server on all platforms. It does not specifically cover any of the Microsoft versions, and certainly not version 6. It does cover System 10. It is very good at explaining the output from things like the DBCC commands. There is also a good section on what to look for in the errorlog. If you are a DBA and want to buy just one book, I would recommend this one, since it covers just about everything you will need to know. This book is filled with little hints, tips, and warnings which are _very_ useful. They have certainly saved my bacon on a number of occasions, and have even made me look a real star more than once.

Guide to SQL Server - Aloke Nath
ISBN: 0-201-62631-4
Published by Addison-Wesley.
567 pages.

This book is solely about MS SQL Server, covering 4.2 for OS/2 and SQL Server for NT. It is not bad, but does seem to regurgitate a lot from the Sybase [sic] manuals. Its coverage is fairly broad, dealing with Transact-SQL on the one hand through to client configuration on the other. It does cover the aspects of MS SQL Server that are different from Sybase (dbcc perfmon, for instance), but it does not flag any as such. Probably a good buy if you only have MS SQL Server and never intend looking at Sybase.

Client/Server Development with Sybase - Alex Berson and George Anderson
ISBN: 0-07-005203-4
Published by McGraw-Hill.
743 pages.

I have used this book as a reference when system manuals were not available.
It is much more useful on how things work and what approach to use than on syntax. The breadth of topics pleased me - all the right jargon is mentioned. The introduction mentions CORBA and DCE. Sybase RPCs are compared to UNIX RPCs. Middleware products are discussed. Talks with our sales rep about the OMNI and NetGateway products were greatly assisted by the diagrams in the Open Server and Gateways chapter. Like any printed text, it is dated: the NetGateway diagram does not show a TCP/IP interface to MVS. However, the information provided is not really diminished. This goes back to the fact that this is a "how things work and how to use things" book, not a compilation of details on a single version.

Physical Database Design for Sybase SQL Server - Rob Gillette, Dean Meunch, Jean Tabaka
ISBN: 0-13-161523-8
Published by Prentice-Hall.
225 pages.

Supposedly the first in a series from Sybase Professional Services, espousing the Sybase Development Framework, or SDF (tm). I've seen no more books, and have never heard any more about SDF. This book is a reasonable attempt to guide developers through the process of turning a logical database design into a physical Sybase implementation. Topics include:

* Defining Tables and Columns
* Defining Keys
* Identifying Critical Transactions
* Adding Redundant Columns
* Adding Derived Columns
* Collapsing Tables
* Splitting Tables
* Handling Supertypes and Subtypes
* Duplicating Parts of Tables
* Adding Tables for Derived Data
* Handling Vector Data
* Generating Sequence Numbers
* Specifying Indexes
* Maintaining Row Uniqueness
* Handling Domain Restrictions
* Handling Referential Integrity
* Maintaining Derived and Redundant Data
* Handling Complex Integrity Constraints
* Controlling Access to Data
* Managing Object Sizes
* Recommending Object Placement
* Required Inputs to Physical DB Design
* Naming Guidelines

Covers System 10. Lots of good practical hints and guidelines on database design. In the absence of any competition, a definite recommendation for newcomers to Sybase database design.

High Performance Relational Database Design - Kirkwood
ISBN: 0-13-030198-1
Published by Ellis Horwood.
263 pages.

This book deserves a mention, partly because it is very good and partly because it has a chapter on Sybase optimisation. It is a very good book and should be a _must_ for all DBAs and anyone designing and building database applications.

_________________________________________________________________

--
Pablo Sanchez               | Ph # (415) 933.3812  Fax # (415) 933.2821
pablo@sgi.com               | Pg # (800) 930.5635  -or-  pablo_p@corp.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]