My Experience Migrating My Image Hosting

The cause was this text message:

【Qiniu Cloud Service】Dear Qiniu Cloud CDN user, the certificates bound to three domain names under your account **@**.com, including article-assets.lynan.cn, are about to expire. To ensure normal service, please replace the certificates as soon as possible; see the email for details. You can log in to “Message Settings / Product Messages / CDN Related Notifications” to turn off SMS notifications. Please ignore this message if the certificate has been abandoned.

So I received a text message from Qiniu saying that my free certificate was about to expire and that I needed to reapply and upload a new SSL certificate. I had recently migrated my blog to CloudFlare and the experience was good, so I decided to migrate my image hosting as well.

In summary, the benefits of migration are:

  1. Free
    (The free plan covers up to 10GB of storage. I'm nowhere near that limit, since I compress my assets by hand before uploading; and even if it ever falls short, the overage is very cheap)
  2. Global CDN
  3. SSL
    If I stayed on Qiniu, I'd have to reconfigure the certificate once a year, and I'm too lazy for that 😅

Collecting the current image resources

Since I created a separate Bucket for my blog and gave it its own domain name, the links are easy to find. Just run a script:

My blog is generated by Hexo, and the original files are in the source folder.

getlinks.js
const { resolve } = require("path");
const { readFileSync, writeFileSync, promises } = require("fs");
const { readdir } = promises;

const directory = "source";

// Recursively collect every file path under `dir`.
async function getFiles(dir) {
  const dirents = await readdir(dir, { withFileTypes: true });
  const files = await Promise.all(
    dirents.map((dirent) => {
      const res = resolve(dir, dirent.name);
      return dirent.isDirectory() ? getFiles(res) : res;
    })
  );
  return Array.prototype.concat(...files);
}

const getLinks = () => {
  // Used as a set: keys are the unique asset URLs found so far.
  const res = {};

  const checkContent = (files) => {
    files.forEach((filePath) => {
      const fileContent = readFileSync(filePath, "utf8");
      // Match every URL on the image-hosting domain, stopping at
      // quotes, parentheses, ">" or whitespace.
      const matchRes = fileContent.match(
        /https:\/\/article-assets\.lynan\.cn[^"'()>\s]+/gi
      );
      if (matchRes && matchRes.length) {
        matchRes.forEach((link) => {
          // Drop any query string (e.g. image-processing parameters).
          let linkText = link;
          if (linkText.includes("?")) {
            linkText = linkText.split("?")[0];
          }
          if (!res[linkText]) {
            res[linkText] = 1;
          }
        });
      }
    });
  };

  getFiles(directory).then((filePaths) => {
    // macOS scatters .DS_Store files everywhere; skip them.
    const files = filePaths.filter((item) => !item.endsWith(".DS_Store"));
    checkContent(files);
    const links = Object.keys(res).join("\n");
    writeFileSync("qiniuLinks.txt", links);
  });
};

getLinks();
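Running node getlinks.js from the blog root leaves the de-duplicated URL list in qiniuLinks.txt, one link per line; that list feeds the download script below.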

Downloading the resources locally

I was a bit shocked: Qiniu's web console can only download files one by one, and “batch download” actually means opening n tabs, which is rather primitive.
Qiniu does have an SDK, but as a humble front-end pixel-pusher, I'd rather just write another script.

The point of this script is to preserve the directory hierarchy of the original files, so that after uploading I only have to swap the old domain in each resource URL for the new one. (Or simply point the original Bucket's domain at the new image-hosting Bucket.)

downloadfiles.js
const axios = require("axios");
const fs = require("fs");
const { finished } = require("stream/promises");
const _ = require("lodash");

// files.txt is the link list from getlinks.js (qiniuLinks.txt, renamed).
const files = fs
  .readFileSync("files.txt", "utf-8")
  .split("\n")
  .filter(Boolean);
const directory = "files";

const domain = "https://article-assets.lynan.cn/";

let errorFiles = "";

const downloadFile = async (filePath) => {
  try {
    // Recreate the URL's directory hierarchy under `files/`.
    const paths = _.compact(filePath.replace(domain, "").split("/"));
    paths.length = paths.length - 1; // drop the file name, keep the folders
    if (!fs.existsSync(`${directory}/${paths.join("/")}`)) {
      fs.mkdirSync(`${directory}/${paths.join("/")}`, { recursive: true });
    }
    const file = await axios.get(filePath, { responseType: "stream" });
    // Wait for the write stream to finish so write errors are caught too.
    await finished(
      file.data.pipe(
        fs.createWriteStream(`${directory}/${filePath.replace(domain, "")}`)
      )
    );
  } catch (error) {
    console.log(filePath);
    errorFiles += filePath + "\n";
  }
};

const task = async () => {
  // Download sequentially to be gentle on the CDN.
  for (let i = 0; i < files.length; i++) {
    await downloadFile(files[i]);
  }
  // Keep a list of anything that failed, for a manual retry.
  if (errorFiles) {
    fs.writeFileSync("errorFiles.txt", errorFiles);
  }
};

task();
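Run it with node downloadfiles.js, with the link list from the previous step renamed to files.txt next to the script. Everything lands under files/ with the same paths it had on the CDN, and any URLs that failed end up in errorFiles.txt for a manual retry.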

Uploading to CloudFlare R2

If there is no Bucket yet, create one first.

Then drag the downloaded files into the upload area.
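Drag-and-drop was enough for my volume. For a larger bucket, uploading by script should also work through R2's S3-compatible API; below is a minimal sketch using @aws-sdk/client-s3, where the account ID, bucket name, and credential variables are placeholders rather than values from my setup.

uploadR2.js

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { readdirSync, readFileSync } = require("fs");
const { join, relative, sep } = require("path");

// The folder produced by downloadfiles.js.
const directory = "files";

// Placeholders: the account ID and keys come from the Cloudflare
// dashboard (R2 → Manage R2 API Tokens).
const client = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

// A rough Content-Type map so images display instead of downloading.
const contentTypes = {
  png: "image/png",
  jpg: "image/jpeg",
  jpeg: "image/jpeg",
  gif: "image/gif",
  webp: "image/webp",
  mp4: "video/mp4",
};

// Recursively collect every file path under `dir`.
const walk = (dir) =>
  readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const res = join(dir, entry.name);
    return entry.isDirectory() ? walk(res) : res;
  });

const upload = async () => {
  for (const filePath of walk(directory)) {
    // The path relative to `files/` becomes the object key, so the
    // bucket mirrors the original URL hierarchy.
    const key = relative(directory, filePath).split(sep).join("/");
    const ext = key.split(".").pop().toLowerCase();
    await client.send(
      new PutObjectCommand({
        Bucket: "<BUCKET_NAME>", // placeholder
        Key: key,
        Body: readFileSync(filePath),
        ContentType: contentTypes[ext] || "application/octet-stream",
      })
    );
    console.log(`uploaded ${key}`);
  }
};

upload();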

For the posts themselves, I just used VS Code's “Find and Replace” to swap the old domain for the new one.
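VS Code was enough for me, but the replacement is easy to script too. A sketch, assuming the new custom domain is the r2-assets.lynan.cn used in the cache rule below, and reusing the directory walk from getlinks.js:

replaceLinks.js

const { resolve } = require("path");
const { readFileSync, writeFileSync, promises } = require("fs");
const { readdir } = promises;

const oldDomain = "https://article-assets.lynan.cn/";
// Assumed: the custom domain bound to the new R2 bucket.
const newDomain = "https://r2-assets.lynan.cn/";

// Recursively collect every file path under `dir` (same as getlinks.js).
async function getFiles(dir) {
  const dirents = await readdir(dir, { withFileTypes: true });
  const files = await Promise.all(
    dirents.map((dirent) => {
      const res = resolve(dir, dirent.name);
      return dirent.isDirectory() ? getFiles(res) : res;
    })
  );
  return Array.prototype.concat(...files);
}

getFiles("source").then((files) => {
  files
    .filter((f) => !f.endsWith(".DS_Store"))
    .forEach((filePath) => {
      const content = readFileSync(filePath, "utf8");
      if (content.includes(oldDomain)) {
        // split/join replaces every occurrence without regex escaping.
        writeFileSync(filePath, content.split(oldDomain).join(newDomain));
      }
    });
});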

I opened the blog and spot-checked a few pages; everything loaded smoothly. After this migration, as long as my domain name is still around, my blog and CloudFlare will live and die together 😄

Why not use a third-party image hosting service?

Actually, I did use one, and even paid for it. The main reason is that I'm afraid it could vanish at any moment; it's simply not as reliable as a large company.

And it does feel a bit unreliable: the image host I used has apparently been migrated several times already (accessing an original link now involves multiple 302 redirects; it still works, but the slowdown is quite noticeable), and I ran into intermittent outages more than once, so I worry about its stability.

R2 Cache Header Settings

After adding a custom domain name to R2, the default response header is Cache-Control: public, max-age=0, must-revalidate.
In my case, the R2 bucket stores files that never change (pictures, videos, etc.); if something needs to change, I re-upload it under a new URL.

So I want to change max-age to at least a month.

In the CloudFlare dashboard for the domain, open Rules → Transform Rules → Modify Response Header (/rules/transform-rules/modify-response-header).

Add a rule that sets a response header for this bucket's domain name:

If...
  When incoming requests match… (Custom filter expression)
  Hostname equals r2-assets.lynan.cn

Then...
  Add header: Cache-Control = max-age=43200000
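Note that max-age is measured in seconds: one month would be 60 × 60 × 24 × 30 = 2,592,000, while the 43,200,000 above works out to 500 days. Since these assets never change anyway, the oversized value does no harm.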

PS. I didn't originally intend to write this article, but looking back over my years in the industry, I still end up writing little scripts quite often, so it seemed worth sharing a few.