
24-hour development of Onmyoji mini program

巴扎黑 · Original · 2017-04-01 15:26:12

0. Preface

Hardcore Onmyoji grinders ("liver emperors") all know that seal tasks refresh twice a day, at 5 a.m. and 6 p.m. The most annoying part of each task is finding the dungeons where the various monsters appear and tracking down mystery clues. Onmyoji does provide NetEase Genie for some data queries, but the experience is painful, so most people end up using a search engine to look up monster distributions and mystery clues.

Using a search engine every time is very inconvenient, so I decided to write a mini program for querying the distribution of Onmyoji monsters, aiming for a faster experience than the usual workarounds and leaving more time for grinding dog food and soul dungeons.

I happened to have two days last weekend, so I started writing immediately.

1. Conception and Design (3 hours)

1.1 Conception

  • The main function of the mini program is querying, so the homepage should be as concise as a search engine; a search box is definitely needed;

  • The homepage shows popular searches, caching the most frequently searched shikigami;

  • Search supports full-name matching or single-character matching;

  • Clicking a search result jumps directly to the shikigami details page;

  • The shikigami details page should include the shikigami's illustration, name, rarity, and haunt locations, with haunt locations sorted by monster count from most to least;

  • Add a feature for reporting data errors and making suggestions;

  • Support the user's personal search history;

  • After considering the mini program's functions, I finally decided to name it Shikigami Hunter (actually this was thought up after development was already finished);

1.2 Design

Once the idea took shape, I used my amateur Photoshop skills to design a sketch, which looked roughly like this:


Well, with the most important pages designed (the homepage and the details page), it was time to think about how to build them!

1.3 Technical Architecture

  • The front end is undoubtedly the WeChat applet;

  • The back end uses Django to provide RESTful API services;

  • Popular searches are cached using Redis as the cache server;

  • Personal search history uses the local storage provided by the WeChat mini program;

  • Shikigami distribution information is crawled and cleaned with a crawler, formatted as JSON, and manually checked before being stored in the database;

  • Shikigami pictures and icons are crawled directly from official resources;

  • Pictures and icons of shikigami that cannot be crawled are made by hand;

  • The mini program requires HTTPS connections, which I happened to have set up before; see the HTTPS Free Deployment Guide.

With these preparations in place, formal development could begin.

2. API service development (5 hours)

I have built Django API services many times before, so I already have a fairly complete solution; see django-simple-serializer.

The reason it took 5 hours is that 4 of them were spent adding django-simple-serializer support for the through feature of Django's ManyToManyField.

In short, the through feature lets you add extra fields or attributes to the intermediate table of a many-to-many relationship. For example, the many-to-many relationship between dungeons and monsters needs a count field that stores how many of each monster appear in each dungeon.
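As a minimal sketch of what this looks like at the model level (the model and field names below are my own illustrative choices, not necessarily the ones used in the project):

[Python]

# models.py -- minimal sketch of a many-to-many relation with a "through" table.
# Model and field names are illustrative assumptions, not the project's actual code.
from django.db import models


class Monster(models.Model):
    name = models.CharField(max_length=64)


class Scene(models.Model):
    """A dungeon / exploration chapter where monsters appear."""
    name = models.CharField(max_length=64)
    monsters = models.ManyToManyField(Monster, through='SceneMonster')


class SceneMonster(models.Model):
    """Intermediate table carrying the extra `count` field."""
    scene = models.ForeignKey(Scene, on_delete=models.CASCADE)
    monster = models.ForeignKey(Monster, on_delete=models.CASCADE)
    count = models.IntegerField(default=0)  # how many of this monster appear in the scene

Serializing a Scene then has to surface the count stored on SceneMonster alongside each monster, which is roughly the kind of support that took the extra hours to add.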

After adding through support, building the APIs went quickly. There are five main APIs:

  • Search interface;

  • Shikigami details interface;

  • Shikigami copy interface;

  • Popular search interface;

  • Feedback interface;


After writing the interfaces, I added some mock data for testing.
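To illustrate how Redis fits into the popular-search interface, here is a minimal sketch; the view name, cache keys, TTL, and response shape are all assumptions for illustration, not the project's actual code:

[Python]

# views.py -- minimal sketch of a popular-search endpoint backed by Redis.
# Key names, TTL, and the response shape are illustrative assumptions.
import json

import redis
from django.http import JsonResponse

r = redis.StrictRedis(host='localhost', port=6379, db=0)

HOT_ZSET = 'hot_search'         # sorted set: member = shikigami name, score = search count
CACHE_KEY = 'hot_search_top10'  # cached JSON list of the current top ten
CACHE_TTL = 60 * 10             # rebuild the cached list at most every ten minutes


def record_search(name):
    """Called from the search view to bump a shikigami's popularity."""
    r.zincrby(HOT_ZSET, amount=1, value=name)


def hot_search(request):
    """Return the ten most searched shikigami names."""
    cached = r.get(CACHE_KEY)
    if cached:
        return JsonResponse({'data': json.loads(cached)})

    # zrevrange returns members ordered by score, highest first
    names = [n.decode('utf-8') for n in r.zrevrange(HOT_ZSET, 0, 9)]
    r.setex(CACHE_KEY, CACHE_TTL, json.dumps(names))
    return JsonResponse({'data': names})

In this sketch the search view would call record_search whenever a query matches a shikigami, so the sorted set keeps itself up to date.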

3. Front-end development (8 hours)

The front end took the longest time.

On the one hand, I am really a back-end engineer and only half-trained on the front end. On the other hand, the mini program framework has some pitfalls. Most of all, endlessly tweaking the interface until it looked right ate up a lot of time.

Overall, writing a mini program feels much like writing Vue.js, except that some HTML tags cannot be used and you have to build with the components officially provided by the mini program framework instead. My impression is that the mini program's component design borrows from React, while its syntax borrows from Vue.js.

After front-end development was completed, the app was divided into these pages:

  • Home page (search page);

  • Shikigami details page;

  • My page (mainly for search history, the disclaimer, etc.);

  • Feedback page;

  • Disclaimer page (why is this page needed? Because all pictures and some resources are taken directly from official Onmyoji resources, so it needs to be stated that they are used for non-profit purposes only and that all copyrights remain with Onmyoji).


Well, the ugly daughter-in-law has to meet her in-laws sooner or later, so here is the interface as finally developed:


I won't go into the basics of WeChat mini programs here; I believe developers who are interested in them will have no trouble writing a simple demo on their own. Instead, I will mainly talk about the pitfalls I encountered during development.

3.1 The background-image property

When writing the shikigami details page, I needed to use the background-image property in two places to set a background image. Everything displayed normally in the WeChat developer tools, but on a real device the images simply would not show. It turned out that on real devices the mini program's background-image does not support referencing local resources. There are two solutions:

  • Use a network image: considering the size of the background image, I gave up on this option;

  • Use base64 to encode the image.


Normally, background-image in CSS supports base64, so this option amounts to encoding the image as a base64 string, storing that string directly, and using it like this:


[CSS]

background-image: url(data:image/image-format;base64,XXXX);

Here image-format is the format of the image itself, and XXXX is the base64 encoding of the image. This method is really just a disguised way of referencing a local resource: the advantage is that it reduces the number of image requests; the disadvantage is that it bloats the CSS file and is not exactly elegant.

In the end, I chose the second option, mainly because the image size and the resulting growth of the wxss file were both within an acceptable range.
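Generating the base64 string is only a few lines of Python; here is a small sketch (the file name and usage are just examples):

[Python]

# make_data_uri.py -- encode an image file as a base64 data URI for use in wxss.
# The file name below is an example; point it at your own background image.
import base64
import sys


def to_data_uri(path, image_format='png'):
    with open(path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('ascii')
    return 'data:image/{0};base64,{1}'.format(image_format, encoded)


if __name__ == '__main__':
    # Usage: python make_data_uri.py background.png > background_uri.txt
    print(to_data_uri(sys.argv[1]))

The resulting string can then be pasted straight into background-image: url(...) in the wxss file.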

3.2 template

Mini programs support templates, but note that a template has its own scope and can only use the data passed in through its data attribute.

In addition, when passing in data, the relevant object needs to be spread (destructured) into the template. Inside the template it is accessed directly as {{ xxx }}, or as {{ item.xxx }} inside a loop.

The spread looks like this:


[XML]

<template is="xxx" data="{{...object}}"/>

The three dots here are the spread (destructuring) operation;

Generally, templates are placed in a separate template file for other pages to import, rather than being written directly in an ordinary wxml file. For example, my directory looks roughly like this:


├── app.js
├── app.json
├── app.wxss
├── pages
│   ├── feedback
│   ├── index
│   ├── my
│   ├── onmyoji
│   ├── statement
│   └── template
│       ├── template.js
│       ├── template.json
│       ├── template.wxml
│       └── template.wxss
├── static
└── utils

To use the template from other files, just import it:

[XML]

<import src="../template/template.wxml" />
Then, wherever you need to reference the template:

[XML]

<template is="xxx" data="{{...object}}"/>

Here I ran into another problem: styles for a template have no effect when written in the template's own wxss file; they need to go in the wxss of the page that uses the template. For example, if the my page uses the template, the corresponding CSS needs to be written in my/my.wxss.

4. Crawling Image Resources (2 hours)

The shikigami icons and portrait images are basically all available on the official Onmyoji website, and drawing them myself was not realistic, so I decisively wrote a crawler to fetch them and store them on my own CDN.

Both the large images and the small ones can be found at http://yys.163.com/shishen/index.html. At first I considered crawling the page and extracting the data with Beautiful Soup, but then I found that the shikigami data is actually loaded asynchronously, which makes things even simpler: analyzing the page revealed that https://g37simulator.webapp.163.com/get_heroid_list returns the shikigami information directly as JSON, so a simple crawler takes care of it:


[Python]

# coding: utf-8
# Crawl the shikigami list from the official JSON endpoint and download each
# portrait ("big") image and icon, naming the files by pinyin.
# Note: written for Python 2 (print statement, dict.iteritems, urllib.urlretrieve).
import json
import requests
import urllib
from xpinyin import Pinyin

url = "https://g37simulator.webapp.163.com/get_heroid_list?callback=jQuery11130959811888616583_1487429691764&rarity=0&page=1&per_page=200&_=1487429691765"
# Strip the JSONP callback wrapper so the body can be parsed as plain JSON
result = requests.get(url).content.replace('jQuery11130959811888616583_1487429691764(', '').replace(')', '')
json_data = json.loads(result)
hellspawn_list = json_data['data']
p = Pinyin()
for k, v in hellspawn_list.iteritems():
    file_name = p.get_pinyin(v.get('name'), '')
    print 'id: {0} name: {1}'.format(k, v.get('name'))
    big_url = "https://yys.res.netease.com/pc/zt/20161108171335/data/shishen_big/{0}.png".format(k)
    urllib.urlretrieve(big_url, filename='big/{0}@big.png'.format(file_name))
    avatar_url = "https://yys.res.netease.com/pc/gw/20160929201016/data/shishen/{0}.png".format(k)
    urllib.urlretrieve(avatar_url, filename='icon/{0}@icon.png'.format(file_name))

However, after crawling the data I found a problem: NetEase's official images are all full-resolution, high-definition originals. For a poor developer like me, serving such large images from a CDN would mean bankruptcy within two days, so I needed to batch-convert them into versions that are not too large but still look acceptable. This is where Photoshop's batch-processing capability comes in.

  • Open Photoshop and open one of the crawled images;

  • Choose "Window" from the menu bar, then "Actions";

  • Under "Actions", create a new action;

  • Click the round record button to start recording the action;

  • Process the image as you normally would and save it for web;

  • Click the square stop button to stop recording;

  • From the menu bar, choose File > Automate > Batch, select the action recorded earlier, and configure the input and output folders;

  • Click OK and you're done;

While the batch job runs you can go farm a few soul dungeons; by the time you are back it should be done. Then upload all the resulting images to the static resource server, and the image work is finished.
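If you would rather not open Photoshop at all, the same batch shrinking can also be scripted. Here is a minimal Pillow sketch as an alternative (this is my own suggestion, not what was used above, and the folder names are just examples):

[Python]

# compress_images.py -- batch-shrink the crawled portraits with Pillow.
# An alternative to the Photoshop batch action; directory names are examples.
import os

from PIL import Image

SRC_DIR = 'big'
DST_DIR = 'big_small'
MAX_SIZE = (640, 640)  # upper bound for width and height after shrinking

if not os.path.isdir(DST_DIR):
    os.makedirs(DST_DIR)

for file_name in os.listdir(SRC_DIR):
    if not file_name.lower().endswith('.png'):
        continue
    img = Image.open(os.path.join(SRC_DIR, file_name))
    img.thumbnail(MAX_SIZE)  # resize in place, preserving aspect ratio
    img.save(os.path.join(DST_DIR, file_name), optimize=True)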

5. Crawling Shikigami Data (4 hours)

Shikigami distribution data on the web is scattered and often inaccurate, so after much deliberation I decided on a half-manual, half-automatic approach: the crawled data is output as JSON:

[JSON]

{
    "scene_name": "探索第一章",
    "team_list": [{
        "name": "天邪鬼绿1",
        "index": 1,
        "monsters": [{
            "name": "天邪鬼绿",
            "count": 1
            },{
            "name": "提灯小僧",
            "count": 2
            }]
        },{
        "name": "天邪鬼绿2",
        "index": 2,
        "monsters": [{
            "name": "天邪鬼绿",
            "count": 1
            },{
            "name": "提灯小僧",
            "count": 2
            }]
        },{
        "name": "提灯小僧1",
        "index": 3,
        "monsters": [{
            "name": "天邪鬼绿",
            "count": 2
            },{
            "name": "提灯小僧",
            "count": 1
            }]
        },{
        "name": "提灯小僧2",
        "index": 4,
        "monsters": [{
            "name": "灯笼鬼",
            "count": 2
            },{
            "name": "提灯小僧",
            "count": 1
            }]
        },{
        "name": "首领",
        "index": 5,
        "monsters": [{
            "name": "九命猫",
            "count": 3
            }]
        }]
}

Then I checked it manually once more. Of course some omissions will remain, which is why the data error reporting feature is so important.

The actual coding for this part probably took little more than half an hour; the rest of the time went into checking the data.

Once everything was checked, I wrote a script to import the JSON directly into the database, verified it, and then deployed to the production server with fabric for testing.
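The import script itself can be very small. Here is a sketch; the model names follow the earlier illustrative models and are assumptions, and it simply accumulates monster counts per scene, ignoring the per-team breakdown:

[Python]

# import_scenes.py -- load one hand-checked JSON file into the database.
# Run inside `python manage.py shell` or a management command. Model names
# follow the earlier sketch and are assumptions, not the project's actual code.
# One-shot import only: re-running it would double the counts.
import json

from myapp.models import Monster, Scene, SceneMonster  # adjust "myapp" to the real app


def import_scene(path):
    with open(path) as f:
        data = json.load(f)

    scene, _ = Scene.objects.get_or_create(name=data['scene_name'])
    for team in data['team_list']:
        for entry in team['monsters']:
            monster, _ = Monster.objects.get_or_create(name=entry['name'])
            link, _ = SceneMonster.objects.get_or_create(scene=scene, monster=monster)
            link.count += entry['count']
            link.save()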

6. Testing (2 hours)

The last step was basically trying the app out on a phone to hunt for problems, tweaking a few visual effects, turning off debug mode, and getting ready to submit for review.

By then it was already Sunday... no, actually one o'clock on Monday morning:


I have to say the mini program review team is very fast: it passed review on Monday afternoon and I put it online right away.

Final screenshots:



