MySQL Cluster 7.3.6 Released


The binary and source versions of MySQL Cluster 7.3.6 have now been made available at http://www.mysql.com/downloads/cluster/ .

Release notes

MySQL Cluster NDB 7.3.6 is a new release of MySQL Cluster, based
on MySQL Server 5.6 and including features from version 7.3 of
the NDB storage engine, as well as fixing a number of recently
discovered bugs in previous MySQL Cluster releases.

Obtaining MySQL Cluster NDB 7.3. MySQL Cluster NDB 7.3 source
code and binaries can be obtained from
http://dev.mysql.com/downloads/cluster/ .

For an overview of changes made in MySQL Cluster NDB 7.3, see
MySQL Cluster Development in MySQL Cluster NDB 7.3
( http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-development-5-6-ndb-7-3.html ).

This release also incorporates all bugfixes and changes made in
previous MySQL Cluster releases, as well as all bugfixes and
feature changes which were added in mainline MySQL 5.6 through
MySQL 5.6.19 (see Changes in MySQL 5.6.19 (2014-05-30)
( http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-19.html )).

Functionality Added or Changed

  • Cluster API: As an aid to debugging, added the ability to
    specify a human-readable name for a given Ndb object and later
    to retrieve it. These operations are implemented,
    respectively, as the setNdbObjectName() and getNdbObjectName()
    methods.
    To make tracing of event handling between a user application
    and NDB easier, you can use the reference (from getReference())
    followed by the name (if provided) in printouts; the reference
    ties together the application Ndb object, the event buffer,
    and the NDB storage engine's SUMA block. (Bug #18419907)
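
As a quick illustration of the new calls, here is a minimal, untested C++
sketch that sets a name on an Ndb object and then prints the reference
together with the name. The connection string ("localhost:1186"), the
database name ("test"), the object name, and the assumption that
setNdbObjectName() is called before init() are all placeholders and
assumptions, not details from the release note.

    #include <NdbApi.hpp>
    #include <cstdio>

    int main()
    {
        ndb_init();
        int rc = 0;
        {
            // Placeholder connect string; adjust for your management server.
            Ndb_cluster_connection conn("localhost:1186");
            if (conn.connect(4, 5, 1) != 0 || conn.wait_until_ready(30, 0) < 0)
            {
                std::fprintf(stderr, "could not connect to cluster\n");
                rc = 1;
            }
            else
            {
                Ndb ndb(&conn, "test");   // "test" database is a placeholder
                // New in 7.3.6: attach a human-readable name to this Ndb
                // object (set here before init(); assumed ordering).
                ndb.setNdbObjectName("example-event-listener");
                if (ndb.init() != 0)
                {
                    std::fprintf(stderr, "Ndb::init() failed\n");
                    rc = 1;
                }
                else
                {
                    // Reference + name together identify this object in traces.
                    std::printf("Ndb reference=0x%x name=%s\n",
                                (unsigned) ndb.getReference(),
                                ndb.getNdbObjectName());
                }
            }
        }
        ndb_end(0);
        return rc;
    }

A program like this is typically built against the NDB API headers and
linked with libndbclient.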

Bugs Fixed

  • Cluster API: When two tables had different foreign keys with
    the same name, ndb_restore considered this a name conflict and
    failed to restore the schema. As a result of this fix, a slash
    character (/) is now expressly disallowed in foreign key
    names, and the naming format parent_id/child_id/fk_name is now
    enforced by the NDB API. (Bug #18824753)
  • Processing a NODE_FAILREP signal that contained an invalid
    node ID could cause a data node to fail. (Bug #18993037, Bug
    #73015)
    References: This bug is a regression of Bug #16007980.
  • When building out of source, some files were written to the
    source directory instead of the build directory. These included the
    manifest.mf files used for creating ClusterJ jars and the
    pom.xml file used by mvn_install_ndbjtie.sh. In addition,
    ndbinfo.sql was written to the build directory, but marked as
    output to the source directory in CMakeLists.txt. (Bug
    #18889568, Bug #72843)
  • Adding a foreign key failed with NDB Error 208 if the parent
    index was the parent table's primary key, the primary key was not
    on the table’s initial attributes, and the child table was not
    empty. (Bug #18825966)
  • When an NDB table served as both the parent table and a child
    table for 2 different foreign keys having the same name,
    dropping the foreign key on the child table could cause the
    foreign key on the parent table to be dropped instead, leading
    to a situation in which it was impossible to drop the
    remaining foreign key. This situation can be modelled using
    the following CREATE TABLE statements:
    CREATE TABLE parent (
        id INT NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=NDB;

    CREATE TABLE child (
        id INT NOT NULL,
        parent_id INT,
        PRIMARY KEY (id),
        INDEX par_ind (parent_id),
        FOREIGN KEY (parent_id)
            REFERENCES parent(id)
    ) ENGINE=NDB;

    CREATE TABLE grandchild (
        id INT,
        parent_id INT,
        INDEX par_ind (parent_id),
        FOREIGN KEY (parent_id)
            REFERENCES child(id)
    ) ENGINE=NDB;
    With the tables created as just shown, the issue occurred when
    executing the statement ALTER TABLE child DROP FOREIGN KEY
    parent_id, because it was possible in some cases for NDB to
    drop the foreign key from the grandchild table instead. When
    this happened, any subsequent attempt to drop the foreign key
    from either the child or from the grandchild table failed.
    (Bug #18662582)
  • ndbmtd supports multiple parallel receiver threads, each of
    which performs signal reception for a subset of the remote
    node connections (transporters) with the mapping of
    remote_nodes to receiver threads decided at node startup.
    Connection control is managed by the multi-instance TRPMAN
    block, which is organized as a proxy and workers, and each
    receiver thread has a TRPMAN worker running locally.
    The QMGR block sends signals to TRPMAN to enable and disable
    communications with remote nodes. These signals are sent to
    the TRPMAN proxy, which forwards them to the workers. The
    workers themselves decide whether to act on signals, based on
    the set of remote nodes they manage.
    The issue arose because the mechanism used by the TRPMAN
    workers to determine which connections they were responsible
    for was implemented in such a way that each worker thought it
    was responsible for all connections. This resulted in the
    TRPMAN actions for OPEN_COMORD, ENABLE_COMREQ, and
    CLOSE_COMREQ being processed multiple times.
    The fix ensures that each TRPMAN instance (receiver thread)
    executes OPEN_COMORD, ENABLE_COMREQ, and CLOSE_COMREQ requests
    only for the connections it is responsible for. In addition,
    the correct TRPMAN instance is now chosen when routing a
    signal for a specific remote connection (a simplified
    illustration of this ownership rule appears after this list).
    (Bug #18518037)
  • Executing ALTER TABLE … REORGANIZE PARTITION after
    increasing the number of data nodes in the cluster from 4 to
    16 led to a crash of the data nodes. This issue was shown to
    be a regression caused by a previous fix, which added a new dump
    handler using a DUMP code that was already in use (7019),
    which caused the command to execute two different handlers
    with different semantics. The new handler was assigned a new
    DUMP code (7024). (Bug #18550318)
    References: This bug is a regression of Bug #14220269.
  • When running with a very slow main thread and one or more
    transaction coordinator threads on different CPUs, it was
    possible to encounter a timeout when sending a
    DIH_SCAN_GET_NODESREQ signal, which could lead to a crash of
    the data node. Now in such cases the timeout is avoided. (Bug
    #18449222)
  • During data node failure handling, the transaction coordinator
    performing takeover gathers all known state information for
    any failed TC instance transactions, determines whether each
    transaction has been committed or aborted, and informs any
    involved API nodes so that they can report this accurately to
    their clients. The TC instance provides this information by
    sending TCKEY_FAILREF or TCKEY_FAILCONF signals to the API
    nodes as appropriate to each affected transaction.
    In the event that this TC instance does not have a direct
    connection to the API node, it attempts to deliver the signal
    by routing it through another data node in the same node group
    as the failing TC, and sends a GSN_TCKEY_FAILREFCONF_R signal
    to TC block instance 0 in that data node. A problem arose in
    the case of multiple transaction coordinators, when this TC
    instance did not have a signal handler for such signals, which
    led it to fail.
    This issue has been corrected by adding a handler to the TC
    proxy block which in such cases forwards the signal to one of
    the local TC worker instances, which in turn attempts to
    forward the signal on to the API node. (Bug #18455971)
  • A local checkpoint (LCP) is tracked using a global LCP state
    (c_lcpState), and each NDB table has a status indicator which
    indicates the LCP status of that table (tabLcpStatus). If the
    global LCP state is LCP_STATUS_IDLE, then all the tables
    should have an LCP status of TLS_COMPLETED.
    When an LCP starts, the global LCP status is LCP_INIT_TABLES
    and the thread starts setting all the NDB tables to
    TLS_ACTIVE. If any tables are not ready for LCP, the LCP
    initialization procedure continues with CONTINUEB signals
    until all tables have become available and been marked
    TLS_ACTIVE. When this initialization is complete, the global
    LCP status is set to LCP_STATUS_ACTIVE.
    This bug occurred when the following conditions were met:
    • An LCP was in the LCP_INIT_TABLES state, and some but not
      all tables had been set to TLS_ACTIVE.
    • The master node failed before the global LCP state
      changed to LCP_STATUS_ACTIVE; that is, before the LCP
      could finish processing all tables.
    • The NODE_FAILREP signal resulting from the node failure
      was processed before the final CONTINUEB signal from the
      LCP initialization process, so that the node failure was
      processed while the LCP remained in the LCP_INIT_TABLES
      state.
    Following master node failure and selection of a new one, the
    new master queries the remaining nodes with a MASTER_LCPREQ
    signal to determine the state of the LCP. At this point, since
    the LCP status was LCP_INIT_TABLES, the LCP status was reset
    to LCP_STATUS_IDLE. However, the LCP status of the tables was
    not modified, so there remained tables with TLS_ACTIVE.
    Afterwards, the failed node is removed from the LCP. If the
    LCP status of a given table is TLS_ACTIVE, there is a check
    that the global LCP status is not LCP_STATUS_IDLE; this check
    failed and caused the data node to fail.
    Now the MASTER_LCPREQ handler ensures that the tabLcpStatus
    for all tables is updated to TLS_COMPLETED when the global LCP
    status is changed to LCP_STATUS_IDLE (a simplified sketch of
    this invariant appears after this list). (Bug #18044717)
  • When performing a copying ALTER TABLE operation, mysqld
    creates a new copy of the table to be altered. This
    intermediate table, which is given a name bearing the prefix
    #sql-, has an updated schema but contains no data. mysqld then
    copies the data from the original table to this intermediate
    table, drops the original table, and finally renames the
    intermediate table with the name of the original table.
    mysqld regards such a table as a temporary table and does not
    include it in the output from SHOW TABLES; mysqldump also
    ignores an intermediate table. However, NDB sees no difference
    between such an intermediate table and any other table. This
    difference in how intermediate tables are viewed by mysqld
    (and MySQL client programs) and by the NDB storage engine can
    give rise to problems when performing a backup and restore if
    an intermediate table existed in NDB, possibly left over from
    a failed ALTER TABLE that used copying. If a schema backup is
    performed using mysqldump and the mysql client, this table is
    not included. However, in the case where a data backup was
    done using the ndb_mgm client’s BACKUP command, the
    intermediate table was included, and was also included by
    ndb_restore, which then failed due to attempting to load data
    into a table which was not defined in the backed up schema.
    To prevent such failures from occurring, ndb_restore now by
    default ignores intermediate tables created during ALTER TABLE
    operations (that is, tables whose names begin with the prefix
    #sql-). A new option --exclude-intermediate-sql-tables is
    added that makes it possible to override the new behavior. The
    option's default value is TRUE; to cause ndb_restore to revert
    to the old behavior and to attempt to restore intermediate
    tables, set this option to FALSE (a short illustration of the
    name-prefix rule appears after this list). (Bug #17882305)
  • The logging of insert failures has been improved. This is
    intended to help diagnose occasional issues seen when writing
    to the mysql.ndb_binlog_index table. (Bug #17461625)
  • The DEFINER column in the INFORMATION_SCHEMA.VIEWS table
    contained erroneous values for views contained in the ndbinfo
    information database. This could be seen in the result of a
    query such as SELECT TABLE_NAME, DEFINER FROM
    INFORMATION_SCHEMA.VIEWS WHERE TABLE_SCHEMA='ndbinfo'. (Bug
    #17018500)
  • Employing a CHAR column that used the UTF8 character set as a
    table’s primary key column led to node failure when restarting
    data nodes. Attempting to restore a table with such a primary
    key also caused ndb_restore to fail. (Bug #16895311, Bug
    #68893)
  • Disk Data: Setting the undo buffer size used by
    InitialLogFileGroup to a value greater than that set by
    SharedGlobalMemory prevented data nodes from starting; the
    data nodes failed with Error 1504 Out of logbuffer memory.
    While the failure itself is expected behavior, the error
    message did not provide sufficient information to diagnose the
    actual source of the problem; now in such cases, a more
    specific error message Out of logbuffer memory (specify
    smaller undo_buffer_size or increase SharedGlobalMemory) is
    supplied. (Bug #11762867, Bug #55515)
  • Cluster Replication: When using NDB$EPOCH_TRANS, conflicts
    between DELETE operations were handled like conflicts between
    updates, with the primary rejecting the transaction and
    dependents, and realigning the secondary. This meant that
    their behavior with regard to subsequent operations on any
    affected row or rows depended on whether they were in the same
    epoch or a different one: within the same epoch, they were
    considered conflicting events; in different epochs, they were
    not considered in conflict.
    This fix brings the handling of conflicts between deletes by
    NDB$EPOCH_TRANS into line with that performed when using NDB$EPOCH for
    conflict detection and resolution, and extends testing with
    NDB$EPOCH and NDB$EPOCH_TRANS to include “delete-delete”
    conflicts, and encapsulate the expected result, with
    transactional conflict handling modified so that a conflict
    between DELETE operations alone is not sufficient to cause a
    transaction to be considered in conflict. (Bug #18459944)
  • Cluster API: When an NDB data node indicated a buffer overflow
    via an empty epoch, the event buffer placed an inconsistent
    data event in the event queue. When this was consumed, it was
    not removed from the event queue as expected, causing
    subsequent nextEvent() calls to return 0. This caused event
    consumption to stall because the inconsistency remained
    flagged forever, while event data accumulated in the queue.
    Event data belonging to an empty inconsistent epoch can be
    found either at the beginning of the event queue or somewhere
    in the middle of it. pollEvents() returns 0 for the first
    case. This fix handles the second case: now a call to
    nextEvent() dequeues the inconsistent event before it returns.
    In order to benefit from this fix, user applications must call
    nextEvent() even when pollEvents() returns 0 (see the
    consumption-loop sketch after this list). (Bug #18716991)
  • Cluster API: The pollEvents() method returned 1, even when
    called with a wait time equal to 0, and there were no events
    waiting in the queue. Now in such cases it returns 0 as
    expected. (Bug #18703871)
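
As referenced in the ndbmtd TRPMAN item above (Bug #18518037), the fix
amounts to an ownership rule for receiver threads. The sketch below is
purely illustrative: the thread count, the modulo mapping, and the names
TrpmanWorker and owns_connection() are invented for this example and are
not taken from the NDB sources.

    #include <cstdint>

    constexpr unsigned NUM_RECEIVER_THREADS = 4;   // assumed thread count

    // Node-to-thread mapping decided once at node startup (hypothetical).
    inline unsigned receiver_thread_for(uint32_t remote_node_id)
    {
        return remote_node_id % NUM_RECEIVER_THREADS;
    }

    struct TrpmanWorker
    {
        unsigned instance_id;   // which receiver thread this worker runs in

        // Before the fix, every worker believed it owned every connection
        // and acted on OPEN_COMORD / ENABLE_COMREQ / CLOSE_COMREQ for all
        // nodes, so each request was processed multiple times.  After the
        // fix, a worker acts only on requests for connections mapped to it.
        bool owns_connection(uint32_t remote_node_id) const
        {
            return receiver_thread_for(remote_node_id) == instance_id;
        }

        void handle_enable_comreq(uint32_t remote_node_id)
        {
            if (!owns_connection(remote_node_id))
                return;             // another TRPMAN instance handles it
            // ... enable communication with remote_node_id here ...
        }
    };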
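
The LCP item above (Bug #18044717) likewise reduces to an invariant:
whenever the global LCP state goes back to LCP_STATUS_IDLE, no table may
remain in TLS_ACTIVE. The following sketch restates that invariant with
hypothetical stand-in types; it is not the actual NDB implementation.

    #include <vector>

    enum GlobalLcpStatus { LCP_STATUS_IDLE, LCP_INIT_TABLES, LCP_STATUS_ACTIVE };
    enum TableLcpStatus  { TLS_COMPLETED, TLS_ACTIVE };

    struct LcpState
    {
        GlobalLcpStatus             global;
        std::vector<TableLcpStatus> tables;   // one entry per NDB table
    };

    // Hypothetical stand-in for the MASTER_LCPREQ handling on a new master:
    // if the LCP is reset to idle while still in LCP_INIT_TABLES, every
    // table must also be taken back to TLS_COMPLETED; otherwise the later
    // check "TLS_ACTIVE implies global != LCP_STATUS_IDLE" fails and the
    // data node dies.
    void reset_lcp_to_idle(LcpState& lcp)
    {
        lcp.global = LCP_STATUS_IDLE;
        for (TableLcpStatus& t : lcp.tables)
            t = TLS_COMPLETED;                // the step the fix adds
    }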
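
The intermediate-table rule from the ndb_restore item above (Bug #17882305)
comes down to a simple name-prefix check, sketched here with a hypothetical
helper rather than ndb_restore's actual code:

    #include <cstring>

    // Tables created by a copying ALTER TABLE carry names starting with
    // "#sql-"; by default ndb_restore now skips such tables unless
    // --exclude-intermediate-sql-tables is set to FALSE.
    inline bool is_intermediate_sql_table(const char* table_name)
    {
        return std::strncmp(table_name, "#sql-", 5) == 0;
    }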
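
Finally, as noted in the empty-epoch item above (Bug #18716991),
applications only benefit from that fix if they call nextEvent() even when
pollEvents() returns 0. The loop below shows such a consumer; the overall
structure is an assumption about a typical NDB API event listener (event
operations are created and executed elsewhere), not code from the release.

    #include <NdbApi.hpp>

    // Assumes 'ndb' is initialized and its event operations already set up.
    void consume_events(Ndb& ndb)
    {
        for (;;)
        {
            int res = ndb.pollEvents(1000);   // wait up to 1 s for event data
            // Deliberately no early 'continue' when res == 0: calling
            // nextEvent() anyway lets the API dequeue an empty inconsistent
            // epoch that would otherwise stall consumption (the second case
            // described above).
            (void) res;

            while (NdbEventOperation* op = ndb.nextEvent())
            {
                switch (op->getEventType())
                {
                case NdbDictionary::Event::TE_INSERT:
                case NdbDictionary::Event::TE_UPDATE:
                case NdbDictionary::Event::TE_DELETE:
                    // ... handle the row change here ...
                    break;
                default:
                    break;
                }
            }
        }
    }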