Synology Cloud Station Uploading

Upload Performance (5 GB x 20 Files)

The third graph shows the time needed for a Windows i7 computer to upload twenty 5 GB files to each NAS. During the upload process, the Cloud Station PC client calculates the file hash, uploads the file, and the Cloud Station server calculates the hash again to verify file integrity. The actual upload time is therefore theoretically bounded by several factors: the Cloud Station hash-calculation speed on the client PC, the speed of the hard drive installed in the Windows computer, the network speed between the Windows PC and the NAS, the NAS's write speed, and the Cloud Station server's hash-calculation speed.
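The hash-then-verify flow described above can be sketched in a few lines of PHP. This is not Synology's actual implementation, only a minimal illustration of the idea using the built-in hash_file() function, with a local copy() standing in for the network transfer:

```php
<?php
// Hypothetical sketch of a hash-then-verify upload (not Synology's code).

// "Client side": create a sample file and hash it before transfer.
$source = tempnam(sys_get_temp_dir(), 'src');
file_put_contents($source, random_bytes(1024 * 1024)); // 1 MB of sample data
$clientHash = hash_file('sha256', $source);

// "Upload": a local copy stands in for the network transfer.
$destination = tempnam(sys_get_temp_dir(), 'dst');
copy($source, $destination);

// "Server side": hash again and compare to confirm integrity.
$serverHash = hash_file('sha256', $destination);
echo $serverHash === $clientHash ? "Integrity verified\n" : "Hash mismatch\n";

unlink($source);
unlink($destination);
```

If the hashes differ, the client knows the transfer was corrupted and can retransmit.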

From the test results, we can see that performance does not differ much between models. This is not surprising, because upload performance on a LAN is first and foremost tied to the client PC's HDD. That is, most Synology NAS models can process more data per unit of time than the client computer's hard drive can supply. The only exception is the DS214se, the only single-core model, which took about twice as long as the others to complete the upload task. For most models (excluding the DS214se), about 30~40 MB of data is processed and uploaded per second, roughly half the expected CIFS transfer speed.

This is because Cloud Station requires each uploaded file to be scanned twice on the client computer: once to find the differences, and once more to upload them.

Upload Performance (1 MB x 100K Files)

We can see that upload performance here is bound by the NAS's HDD I/O, because the results group according to the number of HDDs installed in the NAS. The RS3614xs+ and DS3615xs (both with 10 HDDs in RAID 5) post very similar numbers, as do the DS414 and DS215j (2 HDDs in RAID 0).

The DS1515+ falls between these two groups, and the DS214se is again the only exception in this test, as its computing power cannot keep up with its HDD speed. Based on this, we can expect the RS3614xs+ (12-bay), DS3615xs (12-bay), DS1515+ (5-bay), and DS414 (4-bay) to perform better with all bays fully loaded with drives: the more disks that can be written to simultaneously, the better the performance.

Measuring Success

The only way to be sure we are actually improving our code is to measure the bad situation first, then compare those measurements against new ones taken after applying our improvements. In other words, unless we know how much a "solution" helps us (if at all), we cannot know whether it really is a solution. There are two metrics we can care about. The first is CPU usage: how fast or slow is the process we want to run? The second is memory usage.

How much memory does the script need to execute? These two are often inversely proportional: we can reduce memory usage at the expense of CPU usage, and vice versa. In asynchronous execution models (such as multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, either generally becomes a problem only once a server limit is reached.
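As a hypothetical illustration of that trade-off (the task and sizes here are mine, not from the article): building a full array holds every element in memory at once, while a generator yields one element at a time, trading repeated function calls (CPU) for a near-constant memory footprint.

```php
<?php
// Two ways to sum the numbers 1..100000: an array keeps everything
// in memory at once; a generator yields one value at a time.

function allAtOnce(int $n): array {
    return range(1, $n);          // the whole array lives in memory
}

function oneAtATime(int $n): Generator {
    for ($i = 1; $i <= $n; $i++) {
        yield $i;                 // only one value alive at a time
    }
}

$n = 100000;

// Array approach: measure the memory the full array occupies.
$before = memory_get_usage();
$data = allAtOnce($n);
$arrayMemory = memory_get_usage() - $before;
$arraySum = array_sum($data);
unset($data);

// Generator approach: near-constant memory, more function-call overhead.
$before = memory_get_usage();
$generatorSum = 0;
foreach (oneAtATime($n) as $value) {
    $generatorSum += $value;
}
$generatorMemory = memory_get_usage() - $before;

// Same result, very different memory profiles.
printf("array: %d bytes, generator: %d bytes\n", $arrayMemory, $generatorMemory);
```

Both loops produce the same sum; the generator simply never materializes the whole sequence.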

It is not practical to measure CPU usage from within PHP. If that is the area you want to focus on, consider using something like top on Ubuntu or macOS. On Windows, consider using the Windows Subsystem for Linux so that you can run top in Ubuntu. For the purposes of this tutorial, we will measure memory usage.
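A minimal sketch of how such a measurement can be taken, using PHP's built-in memory_get_usage() and memory_get_peak_usage() functions (the allocation loop is just a stand-in workload):

```php
<?php
// Record the baseline, run the code under test, then compare.
$before = memory_get_usage();

// Stand-in workload: allocate an array of ten thousand strings.
$data = [];
for ($i = 0; $i < 10000; $i++) {
    $data[] = str_repeat('x', 100);
}

$after = memory_get_usage();
$peak  = memory_get_peak_usage();

printf("used: %d bytes, peak: %d bytes\n", $after - $before, $peak);
```

Taking the baseline before the workload matters, because memory_get_usage() also counts memory the script allocated during startup.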
