The binary and source versions of MySQL Cluster 7.3.6 have now been made available at http://www.mysql.com/downloads/cluster/ .
Release notes
MySQL Cluster NDB 7.3.6 is a new release of MySQL Cluster, based
on MySQL Server 5.6 and including features from version 7.3 of the
NDB storage engine, as well as fixing a number of recently
discovered bugs in previous MySQL Cluster releases.
Obtaining MySQL Cluster NDB 7.3. MySQL Cluster NDB 7.3 source
code and binaries can be obtained from
http://dev.mysql.com/downloads/cluster/ .
For an overview of changes made in MySQL Cluster NDB 7.3, see
MySQL Cluster Development in MySQL Cluster NDB 7.3
( http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-development-5-6-ndb-7-3.html ).
This release also incorporates all bugfixes and changes made in
previous MySQL Cluster releases, as well as all bugfixes and
feature changes which were added in mainline MySQL 5.6 through
MySQL 5.6.19 (see Changes in MySQL 5.6.19 (2014-05-30)
( http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-19.html )).
Functionality Added or Changed
- Cluster API: As an aid to debugging, it is now possible to specify a human-readable name for a given Ndb object and later retrieve it. These operations are implemented, respectively, as the setNdbObjectName() and getNdbObjectName() methods.
To make tracing of event handling between a user application and NDB easier, you can use the reference (obtained from getReference()) followed by the name (if provided) in printouts; the reference ties together the application Ndb object, the event buffer, and the NDB storage engine’s SUMA block. A short usage sketch follows. (Bug #18419907)
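A minimal sketch of how these calls might be used, assuming the standard NDB API headers, a cluster reachable through the default connect string, and an arbitrary database name ("test") and object name chosen purely for illustration:

#include <NdbApi.hpp>
#include <cstdio>

int main()
{
    ndb_init();                                  // initialize the NDB API

    Ndb_cluster_connection conn;                 // default connect string
    if (conn.connect() != 0 || conn.wait_until_ready(30, 0) < 0)
        return 1;

    Ndb ndb(&conn, "test");                      // "test" is only an example database
    // Assign a human-readable name (illustrative value) before init(),
    // so that it can be retrieved later for tracing.
    ndb.setNdbObjectName("order-event-listener");
    ndb.init();

    // Printing the reference together with the name ties printouts to
    // this Ndb object, its event buffer, and the SUMA block.
    printf("Ndb reference 0x%x, name '%s'\n",
           (unsigned) ndb.getReference(), ndb.getNdbObjectName());

    ndb_end(0);
    return 0;
}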
Bugs Fixed
- Cluster API: When two tables had different foreign keys with
the same name, ndb_restore considered this a name conflict and
failed to restore the schema. As a result of this fix, a slash
character (/) is now expressly disallowed in foreign key
names, and the naming format parent_id/child_id/fk_name is now
enforced by the NDB API. (Bug #18824753)
- Processing a NODE_FAILREP signal that contained an invalid
node ID could cause a data node to fail. (Bug #18993037, Bug
#73015)
References: This bug is a regression of Bug #16007980.
- When building out of source, some files were written to the
source directory instead of the build directory. These included the
manifest.mf files used for creating ClusterJ jars and the
pom.xml file used by mvn_install_ndbjtie.sh. In addition,
ndbinfo.sql was written to the build directory, but marked as
output to the source directory in CMakeLists.txt. (Bug
#18889568, Bug #72843)
- Adding a foreign key failed with NDB Error 208 if the parent index was the parent table’s primary key, the primary key was not
on the table’s initial attributes, and the child table was not
empty. (Bug #18825966)
- When an NDB table served as both the parent table and a child
table for 2 different foreign keys having the same name,
dropping the foreign key on the child table could cause the
foreign key on the parent table to be dropped instead, leading
to a situation in which it was impossible to drop the
remaining foreign key. This situation can be modelled using
the following CREATE TABLE statements:
CREATE TABLE parent (
  id INT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=NDB;

CREATE TABLE child (
  id INT NOT NULL,
  parent_id INT,
  PRIMARY KEY (id),
  INDEX par_ind (parent_id),
  FOREIGN KEY (parent_id)
    REFERENCES parent(id)
) ENGINE=NDB;

CREATE TABLE grandchild (
  id INT,
  parent_id INT,
  INDEX par_ind (parent_id),
  FOREIGN KEY (parent_id)
    REFERENCES child(id)
) ENGINE=NDB;
With the tables created as just shown, the issue occurred when
executing the statement ALTER TABLE child DROP FOREIGN KEY
parent_id, because it was possible in some cases for NDB to
drop the foreign key from the grandchild table instead. When
this happened, any subsequent attempt to drop the foreign key
from either the child or from the grandchild table failed.
(Bug #18662582)
- ndbmtd supports multiple parallel receiver threads, each of
which performs signal reception for a subset of the remote
node connections (transporters) with the mapping of
remote nodes to receiver threads decided at node startup.
Connection control is managed by the multi-instance TRPMAN
block, which is organized as a proxy and workers, and each
receiver thread has a TRPMAN worker running locally.
The QMGR block sends signals to TRPMAN to enable and disable
communications with remote nodes. These signals are sent to
the TRPMAN proxy, which forwards them to the workers. The
workers themselves decide whether to act on signals, based on
the set of remote nodes they manage.
This issue arose because the mechanism used by the
TRPMAN workers for determining which connections they are
responsible for was implemented in such a way that each worker
thought it was responsible for all connections. This resulted
in the TRPMAN actions for OPEN_COMORD, ENABLE_COMREQ, and
CLOSE_COMREQ being processed multiple times.
With this fix, OPEN_COMORD, ENABLE_COMREQ, and CLOSE_COMREQ requests are executed only by the TRPMAN instances (receiver threads) responsible for the connections concerned; in addition, the correct TRPMAN instance is now chosen when routing such a request for a specific remote connection. (Bug #18518037)
- Executing ALTER TABLE … REORGANIZE PARTITION after
increasing the number of data nodes in the cluster from 4 to
16 led to a crash of the data nodes. This issue was shown to
be a regression caused by a previous fix which added a new dump
handler using a dump code that was already in use (7019),
which caused the command to execute two different handlers
with different semantics. The new handler was assigned a new
DUMP code (7024). (Bug #18550318)
References: This bug is a regression of Bug #14220269.
- When running with a very slow main thread, and one or more
transaction coordinator threads, on different CPUs, it was
possible to encounter a timeout when sending a
DIH_SCAN_GET_NODESREQ signal, which could lead to a crash of
the data node. Now in such cases the timeout is avoided. (Bug
#18449222)
- During data node failure handling, the transaction coordinator
performing takeover gathers all known state information for
any failed TC instance transactions, determines whether each
transaction has been committed or aborted, and informs any
involved API nodes so that they can report this accurately to
their clients. The TC instance provides this information by
sending TCKEY_FAILREF or TCKEY_FAILCONF signals to the API
nodes as appropriate for each affected transaction.
In the event that this TC instance does not have a direct
connection to the API node, it attempts to deliver the signal
by routing it through another data node in the same node group
as the failing TC, and sends a GSN_TCKEY_FAILREFCONF_R signal
to TC block instance 0 in that data node. A problem arose in
the case of multiple transaction coordinators, when this TC
instance did not have a signal handler for such signals, which
led it to fail.
This issue has been corrected by adding a handler to the TC
proxy block which in such cases forwards the signal to one of
the local TC worker instances, which in turn attempts to
forward the signal on to the API node. (Bug #18455971)
- A local checkpoint (LCP) is tracked using a global LCP state
(c_lcpState), and each NDB table has a status indicator which
indicates the LCP status of that table (tabLcpStatus). If the
global LCP state is LCP_STATUS_IDLE, then all the tables
should have an LCP status of TLS_COMPLETED.
When an LCP starts, the global LCP status is LCP_INIT_TABLES
and the thread starts setting all the NDB tables to
TLS_ACTIVE. If any tables are not ready for LCP, the LCP
initialization procedure continues with CONTINUEB signals
until all tables have become available and been marked
TLS_ACTIVE. When this initialization is complete, the global
LCP status is set to LCP_STATUS_ACTIVE.
This bug occurred when the following conditions were met:
  - An LCP was in the LCP_INIT_TABLES state, and some but not all tables had been set to TLS_ACTIVE.
  - The master node failed before the global LCP state changed to LCP_STATUS_ACTIVE; that is, before the LCP could finish processing all tables.
  - The NODE_FAILREP signal resulting from the node failure was processed before the final CONTINUEB signal from the LCP initialization process, so that the node failure was processed while the LCP remained in the LCP_INIT_TABLES state.
Following master node failure and selection of a new one, the
new master queries the remaining nodes with a MASTER_LCPREQ
signal to determine the state of the LCP. At this point, since
the LCP status was LCP_INIT_TABLES, the LCP status was reset
to LCP_STATUS_IDLE. However, the LCP status of the tables was
not modified, so there remained tables whose LCP status was still TLS_ACTIVE.
Afterwards, the failed node is removed from the LCP. If the
LCP status of a given table is TLS_ACTIVE, there is a check
that the global LCP status is not LCP_STATUS_IDLE; this check
failed and caused the data node to fail.
Now the MASTER_LCPREQ handler ensures that the tabLcpStatus
for all tables is updated to TLS_COMPLETED when the global LCP
status is changed to LCP_STATUS_IDLE. (Bug #18044717)
- When performing a copying ALTER TABLE operation, mysqld creates a new copy of the table to be altered. This
intermediate table, which is given a name bearing the prefix
#sql-, has an updated schema but contains no data. mysqld then
copies the data from the original table to this intermediate
table, drops the original table, and finally renames the
intermediate table with the name of the original table.
mysqld regards such a table as a temporary table and does not
include it in the output from SHOW TABLES; mysqldump also
ignores an intermediate table. However, NDB sees no difference
between such an intermediate table and any other table. This
difference in how intermediate tables are viewed by mysqld
(and MySQL client programs) and by the NDB storage engine can
give rise to problems when performing a backup and restore if
an intermediate table existed in NDB, possibly left over from
a failed ALTER TABLE that used copying. If a schema backup is
performed using mysqldump and the mysql client, this table is
not included. However, in the case where a data backup was
done using the ndb_mgm client’s BACKUP command, the
intermediate table was included, and was also included by
ndb_restore, which then failed due to attempting to load data
into a table which was not defined in the backed up schema.
To prevent such failures from occurring, ndb_restore now by
default ignores intermediate tables created during ALTER TABLE
operations (that is, tables whose names begin with the prefix
#sql-). A new option --exclude-intermediate-sql-tables is
added that makes it possible to override the new behavior. The
option’s default value is TRUE; to cause ndb_restore to revert
to the old behavior and to attempt to restore intermediate
tables, set this option to FALSE. (Bug #17882305)
- [...] intended to help diagnose occasional issues seen when writing
to the mysql.ndb_binlog_index table. (Bug #17461625)
- [...] contained erroneous values for views contained in the ndbinfo information database. This could be seen in the result of a query such as SELECT TABLE_NAME, DEFINER FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_SCHEMA='ndbinfo'. (Bug
#17018500)
- [...] table’s primary key column led to node failure when restarting
data nodes. Attempting to restore a table with such a primary
key also caused ndb_restore to fail. (Bug #16895311, Bug
#68893)
- Setting the undo buffer size (undo_buffer_size) in InitialLogFileGroup to a value greater than that set by
SharedGlobalMemory prevented data nodes from starting; the
data nodes failed with Error 1504 Out of logbuffer memory.
While the failure itself is expected behavior, the error
message did not provide sufficient information to diagnose the
actual source of the problem; now in such cases, a more
specific error message Out of logbuffer memory (specify
smaller undo_buffer_size or increase SharedGlobalMemory) is
supplied. (Bug #11762867, Bug #55515)
- When using NDB$EPOCH_TRANS, conflicts between DELETE operations were handled like conflicts between
updates, with the primary rejecting the transaction and
dependents, and realigning the secondary. This meant that
their behavior with regard to subsequent operations on any
affected row or rows depended on whether they were in the same
epoch or a different one: within the same epoch, they were
considered conflicting events; in different epochs, they were
not considered in conflict.
This fix brings the handling of conflicts between deletes by
NDB$EPOCH_TRANS into line with that performed when using NDB$EPOCH for
conflict detection and resolution, and extends testing with
NDB$EPOCH and NDB$EPOCH_TRANS to include “delete-delete”
conflicts, and encapsulate the expected result, with
transactional conflict handling modified so that a conflict
between DELETE operations alone is not sufficient to cause a
transaction to be considered in conflict. (Bug #18459944)
- [...] via an empty epoch, the event buffer places an inconsistent
data event in the event queue. When this was consumed, it was
not removed from the event queue as expected, causing
subsequent nextEvent() calls to return 0. This caused event
consumption to stall because the inconsistency remained
flagged forever, while event data accumulated in the queue.
Event data belonging to an empty inconsistent epoch can be
found either at the beginning of the event queue or somewhere in the middle. pollEvents() returns 0 for the first case. This fix handles the second case: calling nextEvent() now dequeues the inconsistent event before it returns. In order to benefit from this fix, user applications must call nextEvent() even when pollEvents() returns 0; see the sketch following this list. (Bug #18716991)
- The pollEvents() method could return a nonzero value when called with a wait time equal to 0 and there were no events waiting in the queue. Now in such cases it returns 0 as expected. (Bug #18703871)
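A rough sketch of the consumption pattern described in the last two items, assuming an Ndb object whose event operation has already been created from an existing event and executed elsewhere; the function name is illustrative and error handling is omitted:

#include <NdbApi.hpp>

void consume_events(Ndb &ndb)
{
    for (;;)
    {
        // Wait up to one second for new event data; the return value is
        // deliberately not used to decide whether to call nextEvent().
        ndb.pollEvents(1000);

        // Per Bug #18716991, call nextEvent() even when pollEvents()
        // returns 0, so that an empty inconsistent epoch at the head of
        // the queue is dequeued instead of stalling event consumption.
        while (NdbEventOperation *op = ndb.nextEvent())
        {
            // ... inspect op->getEventType() and bound NdbRecAttr values here ...
            (void) op;
        }
    }
}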
