
[OpenCV 2.4] An SVM Example for Linearly Non-Separable Data



[Original post: http://www.cnblogs.com/justany/archive/2012/11/26/2788509.html]

Goals

  • In real-world models, not everything is linearly separable.
  • We need a way to partition data that are not linearly separable.

Principle

In the previous tutorial we derived that, for linearly separable data, the optimal separating hyperplane must satisfy:

    \min_{\beta,\beta_0} L(\beta) = \frac{1}{2}\|\beta\|^2 \quad \text{subject to} \quad y_i(\beta^T x_i + \beta_0) \ge 1 \;\; \forall i

Now we want to introduce something that expresses how misclassified data points (noise, for example) affect the partition.

How can this influence be expressed?

The further a misclassified point lies from the region it should belong to, the more seriously "wrong" that point is.

So for each sample we introduce a slack value ξ_i, the distance of that sample from its own class's region.

    (Figure: distances ξ_i of the misclassified samples to their correct regions.)

The next question is how to convert this degree of "wrongness" into the same measure used by the original model.

We introduce a constant C that expresses the conversion between the ξ_i and the original objective: weighting the sum of the ξ_i by C captures the influence of the misclassified points, which gives the new optimization problem:

    \min_{\beta,\beta_0} L(\beta) = \frac{1}{2}\|\beta\|^2 + C \sum_i \xi_i \quad \text{subject to} \quad y_i(\beta^T x_i + \beta_0) \ge 1 - \xi_i \;\; \text{and} \;\; \xi_i \ge 0 \;\; \forall i

The choice of the parameter C clearly depends on how the training samples are distributed. Although there is no universal answer, the following rules of thumb are useful:

  • Large values of C give solutions with fewer classification errors but a smaller margin. In this case misclassifications contribute heavily to the objective function, and since the optimization minimizes that function, misclassified samples are strongly suppressed.
  • Small values of C give solutions with a larger margin but more classification errors. In this case the misclassification term contributes little to the objective, so the optimization concentrates on finding a hyperplane with a large margin.

Put simply, C expresses how strongly misclassified data influence the model: the larger C is, the more the optimization focuses on reducing misclassification; the smaller it is, the more it focuses on producing a wide-margin hyperplane.
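To make the effect concrete, here is a minimal self-contained sketch (not part of the original article) using a tiny hypothetical 2-D data set with one deliberate outlier, so the classes are not linearly separable. It trains CvSVM twice, once with a small C and once with a large C, and prints the number of training errors and support vectors for each:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

using namespace cv;
using namespace std;

int main(){
    // Hypothetical data: class 1 clustered near (1,1)..(2,2), class 2 clustered
    // near (8,8)..(9,9), plus one class-1 "outlier" at (9,9) inside the class-2
    // cluster, so no straight line separates the two classes perfectly.
    float samples[7][2] = { {1,1}, {2,1}, {1,2}, {9,9}, {8,8}, {9,8}, {8,9} };
    float responses[7]  = {  1,     1,     1,     1,     2,     2,     2   };
    Mat trainData(7, 2, CV_32FC1, samples);
    Mat labels   (7, 1, CV_32FC1, responses);

    float Cvalues[2] = { 0.01f, 100.0f };
    for (int k = 0; k < 2; ++k){
        CvSVMParams params;
        params.svm_type    = SVM::C_SVC;
        params.kernel_type = SVM::LINEAR;
        params.C           = Cvalues[k];
        params.term_crit   = TermCriteria(CV_TERMCRIT_ITER, 1000, 1e-6);

        CvSVM svm;
        svm.train(trainData, labels, Mat(), Mat(), params);

        // Count training errors: a small C trades some misclassification for a
        // wide margin, while a large C penalizes every misclassified sample heavily.
        int errors = 0;
        for (int i = 0; i < trainData.rows; ++i)
            if (svm.predict(trainData.row(i)) != responses[i])
                ++errors;

        cout << "C = " << Cvalues[k]
             << "  training errors = " << errors
             << "  support vectors = " << svm.get_support_vector_count() << endl;
    }
    return 0;
}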

Getting started


#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/ml/ml.hpp>

#define NTRAINING_SAMPLES   100         // Number of training samples per class
#define FRAC_LINEAR_SEP     0.9f        // Fraction of samples making up the linearly separable part

using namespace cv;
using namespace std;

int main(){
    // Data for visual representation
    const int WIDTH = 512, HEIGHT = 512;
    Mat I = Mat::zeros(HEIGHT, WIDTH, CV_8UC3);

    /* 1. Set up the training data randomly */
    Mat trainData(2*NTRAINING_SAMPLES, 2, CV_32FC1);
    Mat labels   (2*NTRAINING_SAMPLES, 1, CV_32FC1);

    RNG rng(100); // Random number generator

    // Set up the linearly separable part of the training data
    int nLinearSamples = (int) (FRAC_LINEAR_SEP * NTRAINING_SAMPLES);

    // Generate random points for class 1
    Mat trainClass = trainData.rowRange(0, nLinearSamples);
    // The x coordinates of the points are in [0, 0.4)
    Mat c = trainClass.colRange(0, 1);
    rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(0.4 * WIDTH));
    // The y coordinates of the points are in [0, 1)
    c = trainClass.colRange(1, 2);
    rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));

    // Generate random points for class 2
    trainClass = trainData.rowRange(2*NTRAINING_SAMPLES - nLinearSamples, 2*NTRAINING_SAMPLES);
    // The x coordinates of the points are in [0.6, 1]
    c = trainClass.colRange(0, 1);
    rng.fill(c, RNG::UNIFORM, Scalar(0.6*WIDTH), Scalar(WIDTH));
    // The y coordinates of the points are in [0, 1)
    c = trainClass.colRange(1, 2);
    rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));

    /* Set up the non-linearly separable part of the training data */

    // Generate random points for classes 1 and 2
    trainClass = trainData.rowRange(nLinearSamples, 2*NTRAINING_SAMPLES - nLinearSamples);
    // The x coordinates of the points are in [0.4, 0.6)
    c = trainClass.colRange(0, 1);
    rng.fill(c, RNG::UNIFORM, Scalar(0.4*WIDTH), Scalar(0.6*WIDTH));
    // The y coordinates of the points are in [0, 1)
    c = trainClass.colRange(1, 2);
    rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));

    /* 2. Set up the class labels */
    labels.rowRange(                0,   NTRAINING_SAMPLES).setTo(1);  // Class 1
    labels.rowRange(NTRAINING_SAMPLES, 2*NTRAINING_SAMPLES).setTo(2);  // Class 2

    /* Set up the support vector machine parameters */
    CvSVMParams params;
    params.svm_type    = SVM::C_SVC;
    params.C           = 0.1;
    params.kernel_type = SVM::LINEAR;
    params.term_crit   = TermCriteria(CV_TERMCRIT_ITER, (int)1e7, 1e-6);

    /* 3. Train the SVM */
    cout << "Starting training process" << endl;
    CvSVM svm;
    svm.train(trainData, labels, Mat(), Mat(), params);
    cout << "Finished training process" << endl;

    /* 4. Show the decision regions */
    Vec3b green(0, 100, 0), blue(100, 0, 0);
    for (int i = 0; i < I.rows; ++i)
        for (int j = 0; j < I.cols; ++j){
            Mat sampleMat = (Mat_<float>(1, 2) << i, j);
            float response = svm.predict(sampleMat);

            if      (response == 1)    I.at<Vec3b>(j, i) = green;
            else if (response == 2)    I.at<Vec3b>(j, i) = blue;
        }

    /* 5. Show the training data */
    int thick = -1;
    int lineType = 8;
    float px, py;
    // Class 1
    for (int i = 0; i < NTRAINING_SAMPLES; ++i){
        px = trainData.at<float>(i, 0);
        py = trainData.at<float>(i, 1);
        circle(I, Point((int) px, (int) py), 3, Scalar(0, 255, 0), thick, lineType);
    }
    // Class 2
    for (int i = NTRAINING_SAMPLES; i < 2*NTRAINING_SAMPLES; ++i){
        px = trainData.at<float>(i, 0);
        py = trainData.at<float>(i, 1);
        circle(I, Point((int) px, (int) py), 3, Scalar(255, 0, 0), thick, lineType);
    }

    /* 6. Show the support vectors */
    thick = 2;
    lineType = 8;
    int x = svm.get_support_vector_count();

    for (int i = 0; i < x; ++i){
        const float* v = svm.get_support_vector(i);
        circle(I, Point((int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thick, lineType);
    }

    imwrite("result.png", I);                         // Save the image
    imshow("SVM for Non-Linearly Separable Data", I); // Show it to the user
    waitKey(0);
}
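As an optional check (not part of the original listing), the fragment below could be inserted right after step 3 to count how many training samples the soft-margin SVM still misclassifies with C = 0.1. It is a sketch, not a standalone program: svm, trainData and labels refer to the variables defined in the listing above, and the name misclassified is introduced here for illustration.

// Optional check, to be placed after the call to svm.train(...):
// count the training samples whose predicted class differs from their label.
int misclassified = 0;
for (int i = 0; i < 2*NTRAINING_SAMPLES; ++i)
{
    float response = svm.predict(trainData.row(i));
    if (response != labels.at<float>(i, 0))
        ++misclassified;
}
cout << "Misclassified training samples: " << misclassified
     << " / " << 2*NTRAINING_SAMPLES << endl;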


Setting the SVM parameters

The parameter settings used here can be compared against the CvSVMParams API below.

CvSVMParams params;
params.svm_type    = SVM::C_SVC;
params.C           = 0.1;
params.kernel_type = SVM::LINEAR;
params.term_crit   = TermCriteria(CV_TERMCRIT_ITER, (int)1e7, 1e-6);

As you can see, this time we use the C-Support Vector Classifier (C_SVC), with the parameter C set to 0.1.
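If you would rather not pick C by hand, OpenCV 2.4 also offers CvSVM::train_auto, which selects parameters by k-fold cross-validation. A minimal sketch (not from the original article), assuming the same trainData, labels and params built in the listing above:

// Sketch only: trainData, labels and params are the objects from the main listing.
CvSVM autoSvm;
// 10-fold cross-validation over the default grid of C values; the best
// parameters found are kept inside the trained model.
autoSvm.train_auto(trainData, labels, Mat(), Mat(), params, 10);
cout << "C selected by cross-validation: " << autoSvm.get_params().C << endl;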

Results

  • The program creates an image and displays the training samples in it, one class as light green circles and the other as light blue circles.
  • The SVM is trained and then used to classify every pixel of the image. The classification splits the image into a blue region and a green region, and the boundary between them is the optimal separating hyperplane. Since the samples are not linearly separable, some of them are inevitably misclassified: some green points fall in the blue region and some blue points fall in the green region.
  • Finally, the support vectors are highlighted with gray rings.

(Figure: the resulting decision regions, training points, and highlighted support vectors.)

Original article this post was adapted from

Support Vector Machines for Non-Linearly Separable Data, OpenCV.org
