Five common PHP database mistakes, covering database schema design, database access, and the business logic code that uses the database, and their solutions.
If only there were one right way to use a database...
There are many ways to design a database schema, access a database, and write PHP business logic on top of it, and most of them end up going wrong in similar ways. This article looks at five problems that frequently appear in database design and in the PHP code that talks to the database, and shows how to fix each one when you run into it.
Problem 1: Using MySQL directly
A common problem is older PHP code that uses the mysql_ functions to talk to the database directly. Listing 1 shows what that direct access looks like.
Listing 1. Access/get.php
<?php
function get_user_id( $name )
{
  $db = mysql_connect( 'localhost', 'root', 'password' );
  mysql_select_db( 'users' );

  $res = mysql_query( "SELECT id FROM users WHERE login='".$name."'" );
  while( $row = mysql_fetch_array( $res ) ) { $id = $row[0]; }

  return $id;
}

var_dump( get_user_id( 'jack' ) );
?>
Note that the mysql_connect function is used to access the database. Also note the query, which uses string concatenation to add the $name parameter to the query.
This approach has two good alternatives: the PEAR DB module and the PHP Data Objects (PDO) classes. Both abstract away the choice of a specific database, so your code can run against IBM® DB2®, MySQL, PostgreSQL, or any other database you want to connect to with little or no tweaking.
Another benefit of using the PEAR DB module or the PDO abstraction layer is that you can use ? placeholders in your SQL statements. Doing so makes the SQL easier to maintain and protects your application from SQL injection attacks.
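The rest of this article uses PEAR DB in its listings, but the same fix works with PDO. Below is a minimal sketch of the same lookup using a PDO prepared statement; the DSN, credentials, and table are assumptions carried over from the listings here, not code from the original article.
<?php
function get_user_id( $name )
{
  // Assumed DSN and credentials, mirroring the examples in this article.
  $db = new PDO( 'mysql:host=localhost;dbname=users', 'root', 'password' );
  $db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

  // The ? placeholder keeps $name out of the SQL string entirely.
  $sth = $db->prepare( 'SELECT id FROM users WHERE login=?' );
  $sth->execute( array( $name ) );

  $id = $sth->fetchColumn();
  return ( $id === false ) ? null : $id;
}

var_dump( get_user_id( 'jack' ) );
?>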
The alternative code using PEAR DB is shown below.
Listing 2. Access/get_good.php
<?php
require_once( "DB.php" );

function get_user_id( $name )
{
  $dsn = 'mysql://root:password@localhost/users';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  $res = $db->query( 'SELECT id FROM users WHERE login=?', array( $name ) );
  $id = null;
  while( $res->fetchInto( $row ) ) { $id = $row[0]; }

  return $id;
}

var_dump( get_user_id( 'jack' ) );
?>
Notice that every direct use of MySQL is gone except for the database type named in the $dsn connection string. The $name variable now reaches the SQL through a ? placeholder, and the query data is passed in through the array supplied as the last argument to the query() method.
Problem 2: Not using auto-increment
Like most modern databases, MySQL can create auto-incrementing unique identifiers on a per-record basis. Even so, you still see code that first runs a SELECT to find the largest id, adds 1 to it, and uses the result as the id of a new record. Listing 3 shows an example of this bad pattern's schema.
Listing 3. Badid.sql
DROP TABLE IF EXISTS users;
CREATE TABLE users (
  id MEDIUMINT,
  login TEXT,
  password TEXT
);

INSERT INTO users VALUES ( 1, 'jack', 'pass' );
INSERT INTO users VALUES ( 2, 'joan', 'pass' );
INSERT INTO users VALUES ( 1, 'jane', 'pass' );
The id field here is specified simply as an integer. So, although it ought to be unique, we can insert any value we like, as the INSERT statements after the CREATE statement show: the third INSERT happily reuses id 1. Listing 4 shows the PHP code that adds users to this kind of schema.
Listing 4. Add_user.php
<?php
require_once( "DB.php" );

function add_user( $name, $pass )
{
  $dsn = 'mysql://root:password@localhost/bad_badid';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  // Find the current largest id, then add one to it.
  $res = $db->query( "SELECT max(id) FROM users" );
  $id = null;
  while( $res->fetchInto( $row ) ) { $id = $row[0]; }
  $id += 1;

  $sth = $db->prepare( "INSERT INTO users VALUES(?,?,?)" );
  $db->execute( $sth, array( $id, $name, $pass ) );

  return $id;
}

$id = add_user( 'jerry', 'pass' );
var_dump( $id );
?>
The code in add_user.php first runs a query to find the largest id, then runs an INSERT with that value plus 1. On a busy server this is a race condition waiting to happen: two requests can read the same maximum and try to insert the same id. It is also inefficient, since it needs two round trips to the database for one insert.
So what is the alternative? Use the auto-increment feature in MySQL to automatically create a unique ID for each insert. The updated schema looks like this.
Listing 5. Goodid.sql
DROP TABLE IF EXISTS users;
CREATE TABLE users (
  id MEDIUMINT NOT NULL AUTO_INCREMENT,
  login TEXT NOT NULL,
  password TEXT NOT NULL,
  PRIMARY KEY( id )
);

INSERT INTO users VALUES ( null, 'jack', 'pass' );
INSERT INTO users VALUES ( null, 'joan', 'pass' );
INSERT INTO users VALUES ( null, 'jane', 'pass' );
We added NOT NULL to each field so it cannot be left empty, AUTO_INCREMENT on id so MySQL generates the value itself, and PRIMARY KEY to mark id as the table's unique key. These changes also speed things up. Listing 6 shows the updated PHP code that inserts a user into the table.
Listing 6. Add_user_good.php
<?php
require_once( "DB.php" );

function add_user( $name, $pass )
{
  $dsn = 'mysql://root:password@localhost/good_genid';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  // Let MySQL assign the id, then ask for the value it generated.
  $sth = $db->prepare( "INSERT INTO users VALUES(null,?,?)" );
  $db->execute( $sth, array( $name, $pass ) );

  $res = $db->query( "SELECT last_insert_id()" );
  $id = null;
  while( $res->fetchInto( $row ) ) { $id = $row[0]; }

  return $id;
}

$id = add_user( 'jerry', 'pass' );
var_dump( $id );
?>
Instead of fetching the largest id first, the code now simply runs the INSERT and then SELECTs last_insert_id() to retrieve the id of the record it just created. The result is simpler and more efficient than the original version, and there is no longer a window in which two requests can pick the same id.
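For comparison, PDO makes this shorter still, because the connection object reports the generated id directly through lastInsertId(). The following is a minimal sketch under the same assumptions (the good_genid schema and the root/password credentials used in the listings), not code from the original article.
<?php
function add_user( $name, $pass )
{
  // Assumed DSN and credentials, mirroring Listing 6.
  $db = new PDO( 'mysql:host=localhost;dbname=good_genid', 'root', 'password' );
  $db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

  $sth = $db->prepare( "INSERT INTO users VALUES(null,?,?)" );
  $sth->execute( array( $name, $pass ) );

  // lastInsertId() returns the AUTO_INCREMENT value generated by the last
  // INSERT on this connection, so no extra SELECT is needed.
  return $db->lastInsertId();
}

var_dump( add_user( 'jerry', 'pass' ) );
?>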
Problem 3: Using multiple databases
Occasionally we see an application in which every table lives in its own database. That can be reasonable for extremely large systems, but for an ordinary application this level of partitioning is unnecessary. In addition, relational queries cannot be run across databases, which defeats the whole idea of using a relational database, to say nothing of how much harder it is to manage tables spread over several databases. So what does using multiple databases look like? First, you need some data. Listing 7 shows this data split into four files.
Listing 7. The database files
Files.sql:
CREATE TABLE files (
  id MEDIUMINT,
  user_id MEDIUMINT,
  name TEXT,
  path TEXT
);

Load_files.sql:
INSERT INTO files VALUES ( 1, 1, 'test1.jpg', 'files/test1.jpg' );
INSERT INTO files VALUES ( 2, 1, 'test2.jpg', 'files/test2.jpg' );

Users.sql:
DROP TABLE IF EXISTS users;
CREATE TABLE users (
  id MEDIUMINT,
  login TEXT,
  password TEXT
);

Load_users.sql:
INSERT INTO users VALUES ( 1, 'jack', 'pass' );
INSERT INTO users VALUES ( 2, 'jon', 'pass' );
In the multi-database version, you would load the files SQL statements into one database and the users SQL statements into another. The PHP code that queries the databases for the files associated with a particular user is shown below.
Listing 8. Getfiles.php
<?php
require_once( "DB.php" );

function get_user( $name )
{
  $dsn = 'mysql://root:password@localhost/bad_multi1';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  $res = $db->query( "SELECT id FROM users WHERE login=?", array( $name ) );
  $uid = null;
  while( $res->fetchInto( $row ) ) { $uid = $row[0]; }

  return $uid;
}

function get_files( $name )
{
  $uid = get_user( $name );

  $rows = array();
  $dsn = 'mysql://root:password@localhost/bad_multi2';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  $res = $db->query( "SELECT * FROM files WHERE user_id=?", array( $uid ) );
  while( $res->fetchInto( $row ) ) { $rows[] = $row; }

  return $rows;
}

$files = get_files( 'jack' );
var_dump( $files );
?>
The get_user function connects to the database containing the users table and retrieves the id of the given user. The get_files function connects to the database containing the files table and retrieves the file rows associated with that user.
A better approach is to load all of the data into a single database and then run one query against it, such as the one shown below.
Listing 9. Getfiles_good.php
<?php
require_once( "DB.php" );

function get_files( $name )
{
  $rows = array();

  $dsn = 'mysql://root:password@localhost/good_multi';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  $res = $db->query( "SELECT files.* FROM users, files WHERE
    users.login=? AND users.id=files.user_id",
    array( $name ) );
  while( $res->fetchInto( $row ) ) { $rows[] = $row; }

  return $rows;
}

$files = get_files( 'jack' );
var_dump( $files );
?>
Not only is this code shorter, it is also easier to understand and more efficient: instead of executing two queries against two connections, we execute a single joined query.
Although this problem may sound far-fetched, in practice the conclusion is almost always the same: keep all of an application's tables in one database unless there is a very compelling reason not to.
Problem 4: Not using relations
Relational databases differ from programming languages in that they do not have an array type. Instead, they use relations between tables to create one-to-many structures between objects, which have the same effect as arrays. One problem I have seen in applications is engineers trying to use the database as though it were a programming language, building arrays out of text strings of comma-separated identifiers. Look at the schema below.
Listing 10. Bad.sql
DROP TABLE IF EXISTS files;
CREATE TABLE files (
  id MEDIUMINT,
  name TEXT,
  path TEXT
);

DROP TABLE IF EXISTS users;
CREATE TABLE users (
  id MEDIUMINT,
  login TEXT,
  password TEXT,
  files TEXT
);

INSERT INTO files VALUES ( 1, 'test1.jpg', 'media/test1.jpg' );
INSERT INTO files VALUES ( 2, 'test2.jpg', 'media/test2.jpg' );
INSERT INTO users VALUES ( 1, 'jack', 'pass', '1,2' );
A user in this system can have multiple files. In a programming language you would use an array to represent the files associated with a user. In this example the programmer instead created a files column containing a comma-separated list of file ids. To get the list of all files for a particular user, the code must first read the row from the users table, then parse the files text and run a separate SELECT for each file. That code is shown below.
Listing 11. Get.php
<?php
require_once( "DB.php" );

function get_files( $name )
{
  $dsn = 'mysql://root:password@localhost/bad_norel';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  // Read the comma-separated list of file ids for this user.
  $res = $db->query( "SELECT files FROM users WHERE login=?", array( $name ) );
  $files = null;
  while( $res->fetchInto( $row ) ) { $files = $row[0]; }

  // Run one extra SELECT for every id in the list.
  $rows = array();
  foreach( explode( ',', $files ) as $file )
  {
    $res = $db->query( "SELECT * FROM files WHERE id=?",
      array( $file ) );
    while( $res->fetchInto( $row ) ) { $rows[] = $row; }
  }

  return $rows;
}

$files = get_files( 'jack' );
var_dump( $files );
?>
This technique is slow, hard to maintain, and makes poor use of the database. The only real fix is to re-architect the schema into traditional relational form, as shown below.
Listing 12. Good.sql
DROP TABLE IF EXISTS files;
CREATE TABLE files (
  id MEDIUMINT,
  user_id MEDIUMINT,
  name TEXT,
  path TEXT
);

DROP TABLE IF EXISTS users;
CREATE TABLE users (
  id MEDIUMINT,
  login TEXT,
  password TEXT
);

INSERT INTO users VALUES ( 1, 'jack', 'pass' );
INSERT INTO files VALUES ( 1, 1, 'test1.jpg', 'media/test1.jpg' );
INSERT INTO files VALUES ( 2, 1, 'test2.jpg', 'media/test2.jpg' );
Here, each file is related to its user through the user_id column in the files table. This may feel backwards to anyone who thinks of the list of files as an array: in code, the container references its elements, not the other way around. But in a relational database this is how it works, and queries are much faster and simpler because of it. Listing 13 shows the corresponding PHP code.
Listing 13. Get_good.php
<?php
require_once( "DB.php" );

function get_files( $name )
{
  $dsn = 'mysql://root:password@localhost/good_rel';
  $db =& DB::Connect( $dsn, array() );
  if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

  $rows = array();
  $res = $db->query( "SELECT files.* FROM users,files WHERE users.login=?
    AND users.id=files.user_id", array( $name ) );
  while( $res->fetchInto( $row ) ) { $rows[] = $row; }

  return $rows;
}

$files = get_files( 'jack' );
var_dump( $files );
?>
Here, a single query to the database returns all of the rows. The code is not complex, and it uses the database the way it was meant to be used.
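One optional refinement that is not part of the original schema: because the join filters on files.user_id, adding an index on that column lets MySQL find a user's files without scanning the whole table. A minimal sketch, in the same style as the SQL listings above:
-- Hypothetical follow-up to Good.sql: index the join column.
CREATE INDEX files_user_id ON files ( user_id );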
Problem 5: The n+1 pattern
I can't tell you how many times I have seen large applications in which the code first retrieves a list of entities (say, customers) and then goes back to fetch them one at a time to get the details of each. We call this the n+1 pattern because of how many queries run: one query to retrieve the list of n entities, and then one query for each of them. This is not a problem when n=10, but what about n=100 or n=1,000? Then the inefficiency really shows. Listing 14 shows a schema for such an example.
Listing 14. Schema.sql
DROP TABLE IF EXISTS authors;
CREATE TABLE authors (
  id MEDIUMINT NOT NULL AUTO_INCREMENT,
  name TEXT NOT NULL,
  PRIMARY KEY ( id )
);

DROP TABLE IF EXISTS books;
CREATE TABLE books (
  id MEDIUMINT NOT NULL AUTO_INCREMENT,
  author_id MEDIUMINT NOT NULL,
  name TEXT NOT NULL,
  PRIMARY KEY ( id )
);

INSERT INTO authors VALUES ( null, 'Jack Herrington' );
INSERT INTO authors VALUES ( null, 'Dave Thomas' );

INSERT INTO books VALUES ( null, 1, 'Code Generation in Action' );
INSERT INTO books VALUES ( null, 1, 'Podcasting Hacks' );
INSERT INTO books VALUES ( null, 1, 'PHP Hacks' );
INSERT INTO books VALUES ( null, 2, 'Pragmatic Programmer' );
INSERT INTO books VALUES ( null, 2, 'Ruby on Rails' );
INSERT INTO books VALUES ( null, 2, 'Programming Ruby' );
The schema itself is solid; there is nothing wrong with it. The problem lies in the code that queries it to find all of the books by a given author, shown below.
Listing 15. Get.php
<?php
require_once( 'DB.php' );

$dsn = 'mysql://root:password@localhost/good_books';
$db =& DB::Connect( $dsn, array() );
if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

function get_author_id( $name )
{
  global $db;
  $res = $db->query( "SELECT id FROM authors WHERE name=?", array( $name ) );
  $id = null;
  while( $res->fetchInto( $row ) ) { $id = $row[0]; }
  return $id;
}

function get_books( $id )
{
  global $db;
  $res = $db->query( "SELECT id FROM books WHERE author_id=?", array( $id ) );
  $ids = array();
  while( $res->fetchInto( $row ) ) { $ids []= $row[0]; }
  return $ids;
}

function get_book( $id )
{
  global $db;
  $res = $db->query( "SELECT * FROM books WHERE id=?", array( $id ) );
  while( $res->fetchInto( $row ) ) { return $row; }
  return null;
}

$author_id = get_author_id( 'Jack Herrington' );
$books = get_books( $author_id );
foreach( $books as $book_id ) {
  $book = get_book( $book_id );
  var_dump( $book );
}
?>
Looking at this code, you might think, "Hey, this is really clear and simple." First get the author's id, then get the list of books, then get the details of each book. Yes, it is clear and simple, but is it efficient? No. Count the queries needed just to retrieve Jack Herrington's books: one to get the author id, one to get the list of book ids, then one per book. Three books take five queries, and n books take n+2.
The solution is to have one function do the work in a single, larger query, as shown below.
Listing 16. Get_good.php
<?php
require_once( 'DB.php' );

$dsn = 'mysql://root:password@localhost/good_books';
$db =& DB::Connect( $dsn, array() );
if ( PEAR::isError( $db ) ) { die( $db->getMessage() ); }

function get_books( $name )
{
  global $db;
  $res = $db->query(
    "SELECT books.* FROM authors,books WHERE books.author_id=authors.id AND authors.name=?",
    array( $name ) );
  $rows = array();
  while( $res->fetchInto( $row ) ) { $rows []= $row; }
  return $rows;
}

$books = get_books( 'Jack Herrington' );
var_dump( $books );
?>
Retrieving the list now takes a single, fast query. It does mean you will probably end up with several functions of this kind, each taking different parameters, but there is really no choice: if you want a scalable PHP application, you must use the database efficiently, and that means smarter queries.
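As a rough illustration of what those extra functions with different parameters might look like, here is a hypothetical companion to get_books() that looks books up by author id instead of author name. It is a sketch in the same PEAR DB style, not code from the original article.
function get_books_by_author_id( $author_id )
{
  global $db;
  // Same single-query idea as get_books(), keyed directly on
  // books.author_id, so no join is needed at all.
  $res = $db->query( "SELECT * FROM books WHERE author_id=?",
    array( $author_id ) );
  $rows = array();
  while( $res->fetchInto( $row ) ) { $rows []= $row; }
  return $rows;
}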
The problem with this example is that it is a little too obvious. In real applications, n+1 or even n*n problems tend to be far more subtle, and they often surface only when a database administrator runs a query profiler against a system that is already having performance trouble.
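If you do not have a profiler handy, one low-tech way to see every statement an application issues is MySQL's general query log. A minimal sketch, assuming a development server new enough to toggle the log at run time and an account with the privileges to do so:
-- Turn the general query log on, exercise the suspect page, then turn it off.
SET GLOBAL general_log_file = '/tmp/all-queries.log';
SET GLOBAL general_log = 'ON';
-- ... load the page you suspect of issuing n+1 queries, then inspect the log ...
SET GLOBAL general_log = 'OFF';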
