author     monty@donna.mysql.com <>  2000-08-18 12:48:00 +0300
committer  monty@donna.mysql.com <>  2000-08-18 12:48:00 +0300
commit     a753a3a2ce252a3972cfcd47cf27c689de61b602 (patch)
tree       dc12a8a920e65278811e12ea88c8d7d24390be23 /sql-bench/limits
parent     29456f6e1c9c3fa0abf58022cfe6045509b9f7c3 (diff)
download   mariadb-git-a753a3a2ce252a3972cfcd47cf27c689de61b602.tar.gz
Updated benchmark and results for PostgreSQL 7.0.2
Added more status to the MyISAM files to avoid checking files that have already been checked.
Diffstat (limited to 'sql-bench/limits')
-rw-r--r--  sql-bench/limits/Adabas.comment     36
-rw-r--r--  sql-bench/limits/Informix.comment   26
-rw-r--r--  sql-bench/limits/access.comment     40
-rw-r--r--  sql-bench/limits/empress.comment   102
-rw-r--r--  sql-bench/limits/pg.comment         30
5 files changed, 0 insertions, 234 deletions
diff --git a/sql-bench/limits/Adabas.comment b/sql-bench/limits/Adabas.comment
deleted file mode 100644
index d36d05047cc..00000000000
--- a/sql-bench/limits/Adabas.comment
+++ /dev/null
@@ -1,36 +0,0 @@
-
-I did not spend much time tuning crash-me or the limits file. In short,
-here's what I did:
-
- - Put engine into ANSI SQL mode by using the following odbc.ini:
-
- [ODBC Data Sources]
- test
-
- [test]
- ServerDB=test
- ServerNode=
- SQLMode=3
-
- - Grabbed the db_Oracle package and copied it to db_Adabas
- Implemented a 'version' method (sketched after this diff).
- - Ran crash-me with the --restart option; it failed when guessing the
- query_size.
- - Reran crash-me 3 or 4 times until it succeeded. At some point it
- justified its name; I had to restart the Adabas server in the
- table name length test ...
- - Finally crash-me succeeded.
-
-That's it, folks. The benchmarks were run on my P90 machine with
-32 MB RAM, running Red Hat Linux 5.0 (kernel 2.0.33, glibc-2.0.7-6).
-MySQL was version 3.21.30, Adabas was version 6.1.15.42 (the one from
-the 1997 promotion CD). I was using X11 and Emacs while benchmarking.
-
-An interesting note: the MySQL server had 4 processes, the three usual
-ones and one serving me, each about 2 MB RAM, including a shared
-memory segment of about 900K. Adabas had 10 processes running from
-the start, each about 16-20 MB, including a shared segment of 1-5 MB.
-You can guess which one I prefer ... :-)
-
-
-Jochen Wiedmann, joe@ispsoft.de
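As context for the 'version' step above: crash-me's per-server packages in
server-cfg each provide a version method that labels the results. A minimal
sketch of what the copied db_Adabas method could look like, assuming the
usual DBI-based server-cfg fields (data_source, user, password); the version
query itself is a hypothetical placeholder, not the actual db_Adabas code:

    # Sketch only; assumes DBI is already loaded by the surrounding script.
    # The VERSIONS query is an assumed placeholder for Adabas.
    sub version
    {
      my ($self) = @_;
      my $version = "Adabas";
      my $dbh = DBI->connect($self->{'data_source'}, $self->{'user'},
                             $self->{'password'}) or return $version;
      # Ask the server what it runs; fall back to the bare name on failure.
      my $sth = $dbh->prepare("SELECT KERNEL FROM VERSIONS");
      if ($sth && $sth->execute)
      {
        my ($ver) = $sth->fetchrow_array;
        $version = "Adabas $ver" if defined $ver;
        $sth->finish;
      }
      $dbh->disconnect;
      return $version;
    }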
diff --git a/sql-bench/limits/Informix.comment b/sql-bench/limits/Informix.comment
deleted file mode 100644
index 557db012dd8..00000000000
--- a/sql-bench/limits/Informix.comment
+++ /dev/null
@@ -1,26 +0,0 @@
-*****************************************************************
-NOTE:
-I, Monty, pulled this comment out from the public mail I got from
-Honza when he published the first crash-me run on Informix
-*****************************************************************
-
-Also attached are diffs for server-cfg and crash-me -- some of them
-fix actual bugs in the code, some add extensions for Informix, and some
-of the comment-outs were necessary to finish the test. Some of the
-problematic pieces that are commented out sent Informix into a very
-long load of 1 on the machine (max_conditions, for example), so they
-could be considered crashes, but I'd prefer that someone check the
-code before drawing such a conclusion.
-
-Some of the code that is commented out failed with some other SQL
-error message, which might mean a problem with the sequence of commands
-in crash-me. Interestingly, some of the tests failed on the first
-run but went OK on the second or third, so the results come from
-several iterations (e.g. a column doesn't exist on the first try but
-the second pass goes OK).
-
-I'd like to hear your comments on the bug fixes and the Informix-specific
-code before we go into debugging the problems.
-
-Yours,
- Honza Pazdziora
diff --git a/sql-bench/limits/access.comment b/sql-bench/limits/access.comment
deleted file mode 100644
index f4a419aa159..00000000000
--- a/sql-bench/limits/access.comment
+++ /dev/null
@@ -1,40 +0,0 @@
-Access 97 tested through ODBC 1998.04.19, by monty@mysql.com
-
-Access 97 has a bug when one executes a SELECT followed very quickly by a
-DROP TABLE or a DROP INDEX command (a repro sketch follows this diff):
-
-[Microsoft][ODBC Microsoft Access 97 Driver] The database engine couldn't lock table 'crash_q' because it's already in use by another person or process. (SQL-S1000) (DBD: st_execute/SQLExecute err=-1)
-
-Debugging SQL queries in Access 97 is terrible, because most error messages
-are of this type:
-
-Error: [Microsoft][ODBC Microsoft Access 97 Driver] Syntax error in CREATE TABLE statement. (SQL-37000)(DBD: st_prepare/SQLPrepare err=-1)
-
-which doesn't tell you a thing!
-
---------------
-
-Access 2000 tested through ODBC 2000.01.02, by monty@mysql.com
-
-crash-me takes a LONG time to run under Access 2000.
-
-The '1+NULL' and the 'OR and AND in WHERE' tests kill
-ActiveState Perl, build 521, DBI-ODBC with an OUT OF MEMORY error.
-The latter test also kills perl/access with some internal errors.
-To work around this, one must run crash-me repeatedly with the --restart option.
-
-Testing of the 'constant string size' (< 500K) takes a LOT of memory
-in Access (at least 250M on my computer).
-
-Testing the number of 'simple expressions' takes a REALLY long time
-and a lot of memory; at some point I was up to 350M of used memory!
-
-To fix the above, I modified crash-me to use lower maximum limits in the
-above tests.
-
-Benchmarks (under Win98):
-
-Running the connect-test will take up all available memory, and this
-memory is not freed even after quitting perl! There is probably some
-bug in the Access connect code that eats memory!
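The SELECT-then-DROP failure described above looks roughly like this through
DBI/ODBC (a repro sketch, assuming a DSN named 'access_test'; the pause and
retry at the end is the manual workaround, not anything crash-me does):

    use DBI;

    # Access 97 bug: a DROP issued immediately after a SELECT on the same
    # table fails with "couldn't lock table 'crash_q'".
    my $dbh = DBI->connect("dbi:ODBC:access_test", "", "") or die $DBI::errstr;
    $dbh->do("CREATE TABLE crash_q (a INTEGER)");
    my $sth = $dbh->prepare("SELECT a FROM crash_q");
    $sth->execute;
    $sth->finish;
    $dbh->do("DROP TABLE crash_q")        # fails when issued too fast ...
      or do { sleep 1;                    # ... so wait a moment and retry
              $dbh->do("DROP TABLE crash_q") or warn $dbh->errstr; };
    $dbh->disconnect;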
diff --git a/sql-bench/limits/empress.comment b/sql-bench/limits/empress.comment
deleted file mode 100644
index b60bf4f19a9..00000000000
--- a/sql-bench/limits/empress.comment
+++ /dev/null
@@ -1,102 +0,0 @@
-*****************************************************************
-NOTE:
-This is an old comment about what it was like to run crash-me on Empress
-for the first time. I think it was on Empress 6.0.
-*****************************************************************
-
-started testing Empress ...
-added a nice line for the max join ....
-stripped the AS out of the FROM field ...
-that's working on Empress ....
-
-at this moment busy with ....
-max constant string size in where .... taking a lot of memory ...
-at this moment (it's still growing, just waiting till it stops ..) 99 MB ..
-sorry, it started growing again ...
-max 170 MB ... then it gives an error ...
-Yes, it crashed .....
-at max constant string size in where ... with IOT trap/Abort (core dumped) :-)
-nice, isn't it ... hope it saved everything ....
-I commented out the signal handling because I could see how the script was
-running and I wasn't sure whether $SIG{PIPE} = 'DEFAULT' ... was working ...
-restarting with limit 8333xxx ... couldn't see it any more ...
-the query is printed ... (200000 lines ..). mmm, nice IOT trap/Abort ...
-and again .. and again ...
-aha ... and now it's going further ...
-max constant string size in select: ...
-taking 100 MB
-crashing over and over again ....
-max simple expressions ...
-is taking ... 82 MB ...
-mmmm this is taking very, very long .... after 10 minutes I will kill it and
-run it again ... I think it can't process this query that fast ... and will
-crash anyway ...
-still growing very slowly towards the 90 MB ...
-killed it ... strange ... it doesn't react to Ctrl-C ... but kill 15 does work
-mmm still busy killing itself ... memory is growing to 128 MB ...
-sorry .. 150 MB .. and then the output ..
-maybe something to add to crash-me ...
-if debugging ... if length($query) > 300 ... just print $errstr .. else
-print $query plus $errstr .. (a sketch follows this diff)
-at this moment it is still busy printing ....
-first clear all locks ... with empadm test lockclear ... otherwise it will
-give me a lock error ...
-restarting at 4194297 .... mmm, a bit high I think ...
-after 5 minutes I will kill it ...
-mmm, have to kill it again ... took 30 MB .. now growing to 42 MB ..
-restarting at 838859 ... hope this will crash normally ... :-)
-I will give it 5 minutes again to complete ...
-taking 12 MB .... will kill it ... after 4 minutes ....
-restarting at 167771 ... taking 6 MB ... giving it 5 minutes again ....
- will kill it again ... otherwise it becomes too late tonight ...
-mmm, started with 33xxxx and it crashes ... :-) yes ...
-can't we build in a function which will restart it by itself ...
-mmmm, this is really boring .. starting it over and over again ...
-WHOA .... NICE >>>>
-Restarting this with high limit: 4097
-.................
-*** Program Bug *** setexpr: unknown EXPR = 1254 (4e6)
-nice, isn't it ... starting it again ...
-finally finished with 4092 ....
-now max big expression .....
-directly taking .. 85 MB ... giving it 5 minutes again ...
-mmm, I am going to kill it again ... mmm, it grows to 146 MB ...
-restarting with 1026 ... taking 25 MB ..
-won't give it that long ... because it will crash anyway (just a guess) ..
-killed it ...
-restarting at 205 ... hope this will work ....
-don't think so ... give it 2 minutes ... taking 12 MB ...
-killed it ... restarting at ... 40 ... yes, it crashes ...
- 7 is crashing ... 1 .... is good .. finally ... a long way ...
-now max stacked expressions ....
-taking 80 MB ... mmmm, what sort of test is this ... it looks more like a
-hard disk test .. but it crashes .. nice ...
-mmm, a YACC overflow ... that's a nice error ...
-but it goes on ... yep, it didn't crash, just an error ...
- mmm
-my patch for the join didn't work ... let's take a look at what went wrong ...
-saw it ... forgot some little thing .. mm, no .. then ... another little typo
-mmm, again a really nice bug ...
-Restarting this with high limit: 131
-...
-*** Program Bug *** xflkadd: too many read locks
-then the lock was forgotten ....
-mmmm, a bigger problem ...
-with empadm test lockinfo ... it gives ...
-*** System Problem *** no more clients can be registered in coordinator
-
-*** User Error *** '/usr/local/empress/rdbms/bin/test' is not a valid database
-that's really, really nice ....
-hmmm, after coordclear ... it's fine again ...
-strange ...
- after restarting the script again ... it is going further ....
-the overflow trick is nice and working well ...
-now I get table 'crash_q' does not exist for everything ...
-normal ...???? mmm, it went well after all .. so I think it's normal ...
-mmmm, a lot of table 'crash_q' does not exist ... again ...
-sometimes when the overflow is there ... I restart it and it says ...
-restarting at xxxx ... that's not good ... but hey ... what the heck ...
-maybe that's good, because if one test runs more than 200 times ....
-it won't exceed that test ...
-....
-yes, finally the end of crash-me ...
-at last ... crash-me safe: yes ...
-yep, I don't think so, eh ....
-
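The debug-printing idea noted in the log above (print only the error when the
query is very long) would look something like this in crash-me's Perl; a
sketch, where $opt_debug, $query and $errstr stand for the surrounding
script's variables:

    # Keep debug output readable: a 200000-line query is useless on screen,
    # so print just the error for anything over 300 characters.
    if ($opt_debug)
    {
      if (length($query) > 300)
      {
        print "Error: $errstr\n";
      }
      else
      {
        print "Query: $query\nError: $errstr\n";
      }
    }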
diff --git a/sql-bench/limits/pg.comment b/sql-bench/limits/pg.comment
deleted file mode 100644
index c693817cc91..00000000000
--- a/sql-bench/limits/pg.comment
+++ /dev/null
@@ -1,30 +0,0 @@
-*****************************************************************
-NOTE:
-This is an old comment about what it was like to run crash-me on PostgreSQL
-for the first time. I think it was on pg 6.2.
-*****************************************************************
-
-mmm, the memory use of postgres is very, very high ...
-at this moment I am testing it ...
-and the tables in join: test is taking 200 MB of memory ...
-I am happy to have 400 MB of swap ... so it can have it ...
-but other programs will give some errors ...
-just a second ago ... vim core dumped .. XFree crashed completely ... back to
-the prompt ... the menu bar of Red Hat disappeared ....
-at this moment the max postgres is taking is 215 MB of memory ...
-
-the problem with postgres is the following error:
-PQexec() -- Request was sent to backend, but backend closed the channel
-before responding. This probably means the backend terminated abnormally
-before or while processing the request
-
-I think we can solve this with a goto ... to go back again ... after
-reconnecting ...
-postgres is taking 377 MB .... mmm, almost out of memory ... 53 MB left ..
-mmm, it's growing ... 389 MB .. 393 MB ... 397 MB .. better to wait for the
-out of memory ... I think 409 ... 412 max ...
-
-PS: added some nice code for the channel closing ...
-it now redoes the query when the error is the one above (a sketch follows
-this diff) ... hope this helps ...
-after crashing my X again ...
-I stopped testing postgres
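The redo-on-error approach mentioned in the PS above (reconnect and retry
when the backend closes the channel) can be sketched like this; the helper
name, DSN and retry count are illustrative, not the actual crash-me code:

    use DBI;

    # Retry a query when the Postgres backend closes the channel mid-request.
    # The handle is passed by reference so the caller keeps the new one.
    sub retry_query
    {
      my ($dbh_ref, $query) = @_;
      for my $attempt (1 .. 3)
      {
        my $result = $$dbh_ref->do($query);
        return $result if defined $result;
        # Only the "backend closed the channel" error warrants a reconnect.
        return undef
          unless ($DBI::errstr || '') =~ /backend closed the channel/;
        $$dbh_ref = DBI->connect("dbi:Pg:dbname=test", "", "")
          or return undef;
      }
      return undef;
    }

Passing the handle by reference means the caller keeps using the reconnected
handle after a retry succeeds.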