
Let’s see how good you look! Public account developed based on Python

By php是最好的语言 (Original) · 2018-07-25 13:56

This article develops a Python-based WeChat public account for appearance detection: the account analyses the pictures users send through Tencent's AI open platform and returns the annotated results. Let's build the public account's beauty test together.

Rendering

[Screenshots: the public account replies to a user's photo with the same photo annotated with the detected gender, age, expression, charm score, and glasses.]

Access the Tencent AI platform

Let’s first take a look at the description of the official face detection and analysis interface:

Detect the location of all faces (Face) in a given picture (Image) and the corresponding facial attributes. The location consists of (x, y, w, h); the facial attributes include gender, age, expression, beauty score, glasses and pose (pitch, roll, yaw).

The request parameters include the following:

  • app_id: the application identifier; we get an app_id after registering on the AI platform

  • time_stamp: timestamp

  • nonce_str: random string

  • sign: signature, which we have to compute ourselves

  • image: the image to be detected (upper limit 1 MB)

  • mode: detection mode

1. Compute the interface signature and construct the request parameters

The official documentation gives the method for computing the interface signature:

  1. Sort the request parameter pairs by key in ascending dictionary order to obtain an ordered list of parameter pairs N.

  2. Splice the parameter pairs in list N into a URL key-value string to obtain string T (for example: key1=value1&key2=value2). The value part must be URL-encoded, and the URL encoding must use uppercase hex letters (for example %E8, not %e8).

  3. Append the application key, using app_key as the key name, as another URL key-value pair at the end of string T to obtain string S (for example: key1=value1&key2=value2&app_key=Key).

  4. Compute the MD5 of string S and convert every character of the MD5 value to uppercase; the result is the interface request signature (sign).
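For illustration, here is a minimal standalone sketch of this signature algorithm (the parameter values and app_key are placeholders, not real credentials; the full get_params function later in the article does the same thing):

import hashlib
from urllib.parse import urlencode


def compute_sign(params, app_key):
    pairs = sorted(params.items(), key=lambda item: item[0])  # step 1: sort parameter pairs by key
    pairs.append(('app_key', app_key))                        # step 3: append app_key as the last pair
    raw = urlencode(pairs)                                    # step 2: URL key-value string (uppercase %XX escapes)
    return hashlib.md5(raw.encode()).hexdigest().upper()      # step 4: uppercase MD5 digest is the sign


# Placeholder values for illustration only
print(compute_sign({'app_id': '123', 'time_stamp': '1532500000', 'nonce_str': 'abcdefg', 'mode': '0'}, 'my_app_key'))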

2. Request the interface URL

We use requests to send the request to the interface URL and get the detection result back as JSON.

Install requests with pip install requests.
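For reference, a successful response is JSON roughly shaped like the following; this is a hypothetical example that only shows the fields used by the code later in the article:

# Hypothetical example of a successful response (only the fields used below)
res = {
    'ret': 0,  # 0 means the request succeeded
    'data': {
        'image_width': 640,
        'image_height': 480,
        'face_list': [
            {
                'x': 120, 'y': 80, 'width': 200, 'height': 200,  # face bounding box
                'gender': 85,      # 0-100, female to male
                'age': 25,
                'expression': 45,  # 0-100, degree of smiling
                'beauty': 80,      # charm score
                'glass': 1,        # 1 means glasses detected
            },
        ],
    },
}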

3. Process the returned information

Process the returned information: draw it onto the picture, then save the processed picture. Here we use the opencv and pillow libraries.

Install them with pip install pillow and pip install opencv-python.

Now let's write the code. We create a new face_id.py file to access the AI platform and return the detection results.

import time
import random
import base64
import hashlib
import requests
from urllib.parse import urlencode
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont
import os


# 1. Compute the interface signature and construct the request parameters

def random_str():
    '''Generate the random string used as nonce_str'''
    chars = 'abcdefghijklmnopqrstuvwxyz'
    r = ''
    for i in range(15):
        index = random.randint(0, 25)
        r += chars[index]
    return r


def image(name):
    '''Read an image file and return its base64-encoded content'''
    with open(name, 'rb') as f:
        content = f.read()
    return base64.b64encode(content)


def get_params(img):
    '''Build the request parameter dictionary, compute the sign used for
    interface authentication, and return the final parameters'''
    params = {
        'app_id': '1106860829',
        'time_stamp': str(int(time.time())),
        'nonce_str': random_str(),
        'image': img,
        'mode': '0'
    }

    sort_dict = sorted(params.items(), key=lambda item: item[0], reverse=False)  # sort parameter pairs by key
    sort_dict.append(('app_key', 'P8Gt8nxi6k8vLKbS'))  # append app_key
    rawtext = urlencode(sort_dict).encode()  # URL-encode into a key-value string
    sha = hashlib.md5()
    sha.update(rawtext)
    md5text = sha.hexdigest().upper()  # the uppercase MD5 digest is the sign
    params['sign'] = md5text  # add the sign to the request parameters
    return params

# 2. Request the interface URL


def access_api(img):
    frame = cv2.imread(img)
    nparry_encode = cv2.imencode('.jpg', frame)[1]
    data_encode = np.array(nparry_encode)
    img_encode = base64.b64encode(data_encode)  # encode the picture as base64
    url = 'https://api.ai.qq.com/fcgi-bin/face/face_detectface'
    res = requests.post(url, get_params(img_encode)).json()  # request the URL and parse the JSON response
    # Draw the returned information onto the picture
    if res['ret'] == 0:  # ret == 0 means the request succeeded
        pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # convert from OpenCV to PIL format so Chinese text can be drawn
        draw = ImageDraw.Draw(pil_img)
        for obj in res['data']['face_list']:
            img_width = res['data']['image_width']  # image width
            img_height = res['data']['image_height']  # image height
            # print(obj)
            x = obj['x']  # x coordinate of the top-left corner of the face box
            y = obj['y']  # y coordinate of the top-left corner of the face box
            w = obj['width']  # width of the face box
            h = obj['height']  # height of the face box
            # Build the text to display from the returned values
            if obj['glass'] == 1:  # glasses
                glass = '有'  # "yes"
            else:
                glass = '无'  # "no"
            if obj['gender'] >= 70:  # gender ranges from 0 (female) to 100 (male)
                gender = '男'  # "male"
            elif 50 <= obj['gender'] < 70:
                gender = "娘"  # "girly"
            elif obj['gender'] < 30:
                gender = '女'  # "female"
            else:
                gender = '女汉子'  # "tomboy"
            if 90 < obj['expression'] <= 100:  # expression ranges from 0 to 100, indicating the degree of smiling
                expression = '一笑倾城'
            elif 80 < obj['expression'] <= 90:
                expression = '心花怒放'
            elif 70 < obj['expression'] <= 80:
                expression = '兴高采烈'
            elif 60 < obj['expression'] <= 70:
                expression = '眉开眼笑'
            elif 50 < obj['expression'] <= 60:
                expression = '喜上眉梢'
            elif 40 < obj['expression'] <= 50:
                expression = '喜气洋洋'
            elif 30 < obj['expression'] <= 40:
                expression = '笑逐颜开'
            elif 20 < obj['expression'] <= 30:
                expression = '似笑非笑'
            elif 10 < obj['expression'] <= 20:
                expression = '半嗔半喜'
            elif 0 <= obj['expression'] <= 10:
                expression = '黯然伤神'
            delt = h // 5  # vertical spacing between lines of text
            # Write the text onto the picture
            if len(res['data']['face_list']) > 1:  # multiple faces detected: write the info inside each face box
                font = ImageFont.truetype('yahei.ttf', w // 8, encoding='utf-8')  # download the font file (yahei.ttf) in advance
                draw.text((x + 10, y + 10), '性别 :' + gender, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 1), '年龄 :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 2), '表情 :' + expression, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 3), '魅力 :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 4), '眼镜 :' + glass, (76, 176, 80), font=font)
            elif img_width - x - w < 170:  # too little room to the right of the face, so write inside the box to keep the text visible
                font = ImageFont.truetype('yahei.ttf', w // 8, encoding='utf-8')
                draw.text((x + 10, y + 10), '性别 :' + gender, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 1), '年龄 :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 2), '表情 :' + expression, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 3), '魅力 :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 4), '眼镜 :' + glass, (76, 176, 80), font=font)
            else:
                font = ImageFont.truetype('yahei.ttf', 20, encoding='utf-8')
                draw.text((x + w + 10, y + 10), '性别 :' + gender, (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 1), '年龄 :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 2), '表情 :' + expression, (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 3), '魅力 :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 4), '眼镜 :' + glass, (76, 176, 80), font=font)

            draw.rectangle((x, y, x + w, y + h), outline="#4CB050")  # draw the face bounding box
        cv2img = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)  # convert the PIL image back to OpenCV format
        cv2.imwrite('faces/{}'.format(os.path.basename(img)), cv2img)  # save the annotated picture under the faces/ folder
        return '检测成功'
    else:
        return '检测失败'

At this point the face detection interface and the image processing are done. After receiving a picture from a user, we call this function and return the processed picture to the user.
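For a quick local test, the function can be called directly; the file images/test.jpg and the faces/ directory here are hypothetical and just need to exist beforehand:

from face_id import access_api

# Annotate images/test.jpg; on success the result is written to faces/test.jpg
result = access_api('images/test.jpg')
print(result)  # '检测成功' on success, '检测失败' otherwise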

Return the picture to the user

After receiving a picture from a user, the following steps are required:

Save the picture

After receiving the user's picture, we first need to save it; only then can we call the face analysis interface and pass the image data to it. We write an img_download function to download the image. See the code below for details.

Call the face analysis interface

After downloading the image, call the interface function in the face_id.py file to get the processed image.

Upload pictures

The detection result is a new picture. To send it to the user we need a Media_ID, and to obtain the Media_ID we must first upload the picture as temporary material. So we write an img_upload function to upload images; uploading requires an access_token, which we obtain with another function.

To obtain the access_token, our server's IP address must first be added to the IP whitelist, otherwise the request fails. Log in to "WeChat Public Platform - Development - Basic Configuration" and add the server's IP address to the IP whitelist in advance. You can check the machine's public IP at http://ip.qq.com/...

Now let's write the code. We create a new utils.py to download and upload pictures:

import requests
import json
import threading
import time
import os

token = ''
app_id = 'wxfc6adcdd7593a712'
secret = '429d85da0244792be19e0deb29615128'


def img_download(url, name):
    '''Download the user's picture into images/ and return the file name,
    or 'large' if the file exceeds the 1 MB limit'''
    r = requests.get(url)
    with open('images/{}-{}.jpg'.format(name, time.strftime("%Y_%m_%d%H_%M_%S", time.localtime())), 'wb') as fd:
        fd.write(r.content)
    if os.path.getsize(fd.name) >= 1048576:
        return 'large'
    return os.path.basename(fd.name)


def get_access_token(appid, secret):
    '''Fetch the access_token and schedule a refresh every 100 minutes
    (tokens are valid for 7200 seconds)'''

    url = 'https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&appid={}&secret={}'.format(appid, secret)
    r = requests.get(url)
    parse_json = json.loads(r.text)
    global token
    token = parse_json['access_token']
    global timer
    timer = threading.Timer(6000, get_access_token, args=(appid, secret))  # pass the credentials so the refresh call works
    timer.start()


def img_upload(mediaType, name):
    '''Upload a picture as temporary material and return its media_id'''
    global token
    url = "https://api.weixin.qq.com/cgi-bin/media/upload?access_token=%s&type=%s" % (token, mediaType)
    files = {'media': open('{}'.format(name), 'rb')}
    r = requests.post(url, files=files)
    parse_json = json.loads(r.text)
    return parse_json['media_id']

get_access_token(app_id, secret)
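Chained together by hand, the helpers look like this (the picture URL and openid are placeholders; this is the same flow that connect.py performs below):

from utils import img_download, img_upload
from face_id import access_api

# Hypothetical test: download a picture, run detection, upload the annotated result
name = img_download('http://example.com/photo.jpg', 'openid_placeholder')
if name != 'large' and access_api('images/' + name) == '检测成功':
    media_id = img_upload('image', 'faces/' + name)
    print(media_id)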

Return to the user

We just need to modify the logic for handling a received picture: after receiving the picture, run face detection, upload the result to obtain a Media_ID, and return the image to the user. Here is the code of connect.py:

import falcon
from falcon import uri
from wechatpy.utils import check_signature
from wechatpy.exceptions import InvalidSignatureException
from wechatpy import parse_message
from wechatpy.replies import TextReply, ImageReply

from utils import img_download, img_upload
from face_id import access_api


class Connect(object):

    def on_get(self, req, resp):
        # WeChat server verification: parse the query string, check the signature, echo back echostr
        b = uri.parse_query_string(req.query_string)

        try:
            check_signature(token='lengxiao', signature=b['signature'], timestamp=b['timestamp'], nonce=b['nonce'])
            resp.body = (b['echostr'])
        except InvalidSignatureException:
            pass
        resp.status = falcon.HTTP_200

    def on_post(self, req, resp):
        xml = req.stream.read()
        msg = parse_message(xml)
        if msg.type == 'text':
            reply = TextReply(content=msg.content, message=msg)
            xml = reply.render()
            resp.body = (xml)
            resp.status = falcon.HTTP_200
        elif msg.type == 'image':
            name = img_download(msg.image, msg.source)  # download the user's picture
            r = access_api('images/' + name)
            if r == '检测成功':
                media_id = img_upload('image', 'faces/' + name)  # upload the annotated picture to get its media_id
                reply = ImageReply(media_id=media_id, message=msg)
            else:
                reply = TextReply(content='人脸检测失败,请上传1M以下人脸清晰的照片', message=msg)
            xml = reply.render()
            resp.body = (xml)
            resp.status = falcon.HTTP_200

app = falcon.API()
connect = Connect()
app.add_route('/connect', connect)

Now our work is done and the official account can run the appearance test. I originally planned to enable it on my own official account, but the following problems stopped me.

  1. WeChat requires the program to respond within 5 seconds, otherwise the user sees "The service provided by the official account is faulty". Image processing, however, is sometimes slow and often takes more than 5 seconds. The correct approach is to immediately return an empty string to acknowledge the request, process the image in a separate thread, and push the finished picture to the user through the customer service message interface (see the sketch after this list). Unfortunately, unverified public accounts have no access to the customer service interface, so there is no workaround: whenever processing exceeds 5 seconds, an error is reported.

  2. The menu cannot be customized. Once custom development is enabled, the menu also has to be configured by the program, but unverified official accounts have no permission to configure the menu through the API and can only configure it in the WeChat admin backend.
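For reference, here is a minimal sketch of the asynchronous approach described in point 1. It assumes a verified account with access to the customer service message interface; the helper names push_image and handle_image are illustrative, not part of the code above:

import json
import threading

import requests

import utils
from face_id import access_api
from utils import img_download, img_upload


def push_image(openid, media_id):
    '''Send a picture to the user through the customer service message interface'''
    url = 'https://api.weixin.qq.com/cgi-bin/message/custom/send?access_token={}'.format(utils.token)
    payload = {'touser': openid, 'msgtype': 'image', 'image': {'media_id': media_id}}
    requests.post(url, data=json.dumps(payload, ensure_ascii=False).encode('utf-8'))


def handle_image(msg):
    '''Run the slow detection in the background, then push the result to the user'''
    name = img_download(msg.image, msg.source)
    if access_api('images/' + name) == '检测成功':
        media_id = img_upload('image', 'faces/' + name)
        push_image(msg.source, media_id)


# In connect.py's on_post, instead of processing synchronously:
#     threading.Thread(target=handle_image, args=(msg,)).start()
#     resp.body = ''          # acknowledge within 5 seconds
#     resp.status = falcon.HTTP_200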

So I have not enabled this program on my own official account, but if you have a verified official account, you can try developing all kinds of fun features.

