
(mysql.info.gz) InnoDB restrictions

Info Catalog (mysql.info.gz) InnoDB Error handling (mysql.info.gz) InnoDB (mysql.info.gz) InnoDB troubleshooting
 
 15.17 Restrictions on `InnoDB' Tables
 =====================================
 
    * A table cannot contain more than 1000 columns.
 
    * The internal maximum key length is 3500 bytes, but MySQL itself
      restricts this to 1024 bytes.
 
    * The maximum row length, except for `BLOB' and `TEXT' columns, is
      slightly less than half of a database page; that is, the maximum
      row length is about 8000 bytes. `LONGBLOB' and `LONGTEXT' columns
      must be less than 4GB, and the total row length, including `BLOB'
      and `TEXT' columns, must also be less than 4GB.  `InnoDB' stores
      the first 512 bytes of a `BLOB' or `TEXT' column in the row, and
      the rest in separate pages.
 
    * On some old operating systems, data files must be less than 2GB.
 
    * The combined size of the `InnoDB' log files must be less than 4GB.
 
    * The minimum tablespace size is 10MB.  The maximum tablespace size
      is four billion database pages (64TB).  This is also the maximum
      size for a table.
 
    * `InnoDB' tables do not support `FULLTEXT' indexes.
 
    * `InnoDB' tables do not support spatial column types.
 
    * `ANALYZE TABLE' counts `cardinality' by doing 10 random dives to
      each of the index trees and updating index cardinality estimates
      accordingly.  Note that because these are only estimates, repeated
      runs of `ANALYZE TABLE' may produce different numbers.  This makes
      `ANALYZE TABLE' fast on `InnoDB' tables but not 100% accurate as
      it doesn't take all rows into account.
 
      MySQL uses index cardinality estimates only in join optimization.
      If some join is not optimized in the right way, you may try using
      `ANALYZE TABLE'. In the few cases that `ANALYZE TABLE' doesn't
      produce values good enough for your particular tables, you can use
      `FORCE INDEX' with your queries to force the use of a particular
      index, or set `max_seeks_for_key' to ensure that MySQL prefers
      index lookups over table scans.  See the `Server system variables'
      and `Optimizer Issues' sections.
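      For example, assuming a hypothetical join between tables `T1' and
      `T2', where `T2' has an index named `IDX_B' on column `B', you
      could refresh the estimates and then force that index:

           ANALYZE TABLE T2;
           SELECT * FROM T1, T2 FORCE INDEX (IDX_B) WHERE T1.A = T2.B;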
 
    * On Windows, `InnoDB' always stores database and table names
      internally in lowercase. To move databases in binary format from
      Unix to Windows or from Windows to Unix, you should have all
      database and table names in lowercase.
 
    * *Warning:* Do _not_ convert MySQL system tables in the `mysql'
      database from `MyISAM' to `InnoDB' tables!  This is an unsupported
      operation.  If you do this, MySQL will not restart until you
      restore the old system tables from a backup or re-generate them
      with the `mysql_install_db' script.
 
    * `InnoDB' does not keep an internal count of rows in a table. (This
      would actually be somewhat complicated because of
      multi-versioning.)  To process a `SELECT COUNT(*) FROM T'
      statement, `InnoDB' must scan an index of the table, which will
      take some time if the index is not entirely in the buffer pool. To
      get a fast count, you have to use a counter table you create
      yourself and let your application update it according to the
      inserts and deletes it does. If your table does not change often,
      using the MySQL query cache is a good solution.  `SHOW TABLE
      STATUS' can also be used if an approximate row count is
      sufficient.  See the `InnoDB tuning' section.
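      As a sketch of such a counter table (the table and column names
      here are only illustrative), update the counter in the same
      transaction as the insert or delete so that it stays consistent:

           CREATE TABLE T_COUNT (CNT BIGINT UNSIGNED NOT NULL) TYPE = InnoDB;
           INSERT INTO T_COUNT SELECT COUNT(*) FROM T;
           # in the same transaction as each insert into T:
           UPDATE T_COUNT SET CNT = CNT + 1;
           # in the same transaction as each delete from T:
           UPDATE T_COUNT SET CNT = CNT - 1;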
 
    * For an `AUTO_INCREMENT' column, you must always define an index for
      the table, and that index must contain just the `AUTO_INCREMENT'
      column. In `MyISAM' tables, the `AUTO_INCREMENT' column may be
      part of a multi-column index.
 
    * `InnoDB' does not support the `AUTO_INCREMENT' table option for
      setting the initial sequence value in a `CREATE TABLE' or `ALTER
      TABLE' statement.  To set the value with `InnoDB', insert a dummy
      row with a value one less and delete that dummy row, or insert the
      first row with an explicit value specified.
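      For example, to make a sequence start at 100 (the table and
      column names are only illustrative):

           CREATE TABLE T (ID INT NOT NULL AUTO_INCREMENT,
                           PRIMARY KEY (ID)) TYPE = InnoDB;
           INSERT INTO T (ID) VALUES (99);    # dummy row
           DELETE FROM T WHERE ID = 99;
           INSERT INTO T (ID) VALUES (NULL);  # should be assigned 100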
 
    * When you restart the MySQL server, `InnoDB' may reuse an old value
      for an `AUTO_INCREMENT' column (that is, a value that was assigned
      to an old transaction that was rolled back).
 
    * When an `AUTO_INCREMENT' column runs out of values, `InnoDB' wraps
      a `BIGINT' to `-9223372036854775808' and `BIGINT UNSIGNED' to `1'.
      However, `BIGINT' values have 64 bits, so note that even if you
      inserted one million rows per second, it would still take nearly
      three hundred thousand years before `BIGINT' reached its upper
      bound.  With all other integer type columns, a duplicate-key error
      results.  This is similar to how `MyISAM' works, because it is
      mostly general MySQL behavior and not specific to any storage
      engine.
 
    * `DELETE FROM TBL_NAME' does not regenerate the table but instead
      deletes all rows, one by one.
 
    * `TRUNCATE TBL_NAME' is mapped to `DELETE FROM TBL_NAME' for
      `InnoDB' and doesn't reset the `AUTO_INCREMENT' counter.
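      For example (the table and column names are only illustrative),
      after a truncate the sequence continues where it left off:

           TRUNCATE TABLE T;
           INSERT INTO T (ID) VALUES (NULL);
           # the new row continues the old AUTO_INCREMENT sequence;
           # it does not restart from 1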
 
    * `SHOW TABLE STATUS' does not give accurate statistics on `InnoDB'
      tables, except for the physical size reserved by the table.  The
      row count is only a rough estimate used in SQL optimization.
 
    * If you try to create a unique index on a prefix of a column, you
      will get an error:
 
           CREATE TABLE T (A CHAR(20), B INT, UNIQUE (A(5))) TYPE = InnoDB;
 
      If you create a non-unique index on a prefix of a column, `InnoDB'
      will create an index over the whole column.
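      If you need the uniqueness constraint, a workaround is to declare
      the `UNIQUE' index over the whole column instead (same
      illustrative table as above):

           CREATE TABLE T (A CHAR(20), B INT, UNIQUE (A)) TYPE = InnoDB;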
 
    * Before MySQL 4.0.20 or 4.1.2, the MySQL `LOCK TABLES' operation
      does not know about `InnoDB' row-level locks set by completed SQL
      statements. This means that you can get a table lock on a table
      even if there still exist transactions by other users that hold
      row-level locks on the same table. Thus your operations on the
      table may have to wait if they collide with locks held by other
      users.  A deadlock is also possible. However, this does not
      endanger transaction integrity, because the row-level locks set
      by `InnoDB' always preserve integrity.  Also, a table lock
      prevents other transactions from acquiring more row-level locks
      (in a conflicting lock mode) on the table.
 
    * Beginning with MySQL 4.0.20 and 4.1.2, the MySQL `LOCK TABLES'
      operation acquires two locks on each table if
      `innodb_table_locks=1'. (1 is the default.)  In addition to a
      table lock on the MySQL layer, it will also acquire an `InnoDB'
      table lock.  Older versions of MySQL do not acquire `InnoDB' table
      locks.  Beginning with MySQL 4.0.22 and 4.1.7, the old behavior
      can be selected by setting `innodb_table_locks=0'.  If no `InnoDB'
      table lock is acquired, `LOCK TABLES' will complete even if some
      records of the tables are being locked by other transactions.
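      For example, on a server new enough to support the variable, the
      old behavior can be selected for the current connection like this
      (the table name is only illustrative):

           SET innodb_table_locks = 0;
           LOCK TABLES T WRITE;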
 
    * All `InnoDB' locks held by a transaction will be released when the
      transaction is committed or aborted.  Thus, it does not make much
      sense to invoke `LOCK TABLES' on `InnoDB' tables in `AUTOCOMMIT=1'
      mode, because the acquired `InnoDB' table locks would be released
      immediately.
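      To make `LOCK TABLES' worthwhile with `InnoDB' tables, switch
      autocommit off first, for example (the table name is only
      illustrative):

           SET AUTOCOMMIT = 0;
           LOCK TABLES T WRITE;
           # ... statements operating on T ...
           COMMIT;
           UNLOCK TABLES;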
 
    * Sometimes it would be useful to lock further tables in the course
      of a transaction.  Unfortunately, `LOCK TABLES' in MySQL performs
      an implicit `COMMIT' and `UNLOCK TABLES'.  An InnoDB variant of
      `LOCK TABLES' has been planned that can be executed in the middle
      of a transaction.
 
    * Before MySQL 3.23.52, replication always ran with autocommit
      enabled. Therefore, consistent reads on the slave could also see
      partially processed transactions, so such reads were not really
      consistent on the slave. This restriction was removed in MySQL
      3.23.52.
 
    * The `LOAD TABLE FROM MASTER' statement for setting up replication
      slave servers does not yet work for `InnoDB' tables. A workaround
      is to alter the table to `MyISAM' on the master, then do the load,
      and after that alter the master table back to `InnoDB'.
 
    * The default database page size in `InnoDB' is 16KB. By recompiling
      the code, you can set it to values ranging from 8KB to 64KB.  You
      have to update the values of `UNIV_PAGE_SIZE' and
      `UNIV_PAGE_SIZE_SHIFT' in the `univ.i' source file.
 
 