8. JetStor SAN / NAS Platform – One Architecture
Datastore
Backup
Disaster Recovery
XCubeNAS
File Storage
Production
Datastore
Backup
Surveillance
XCubeSAN
Hybrid-Flash Block Storage
Datastore
Backup
Production
XCubeNXT
Unified Storage
Production
Datastore
X Series
All-Flash Block Storage
9. X Series Gen2
XCubeSAN/X Series
XCubeNAS
Availability
Performance
XCubeNXT
JetStor Product Positioning
Hybrid Flash Unified Storage
Hybrid / All-Flash Block Storage
File Storage
NVMe All-Flash Block Storage
10. 4U 24-bay / 3U 16-bay / 2U 12-bay / 2U 26-bay
XS5224D
XS5224S
XS5216D
XS5216S
XS5212D
XS5212S
XS5226D
XS5226S
XS5200 Series
(Intel® Xeon D-1527 4 Cores, 8GB to
128GB DDR4 Memory Per Controller)
XS3224D
XS3224S
XS3216D
XS3216S
XS3212D
XS3212S
XS3226D
XS3226S
XS3200 Series
(Intel® Xeon D-1517 4 Cores, 4GB to 128GB
DDR4 Memory Per Controller)
XS1224D
XS1224S
XS1216D
XS1216S
XS1212D
XS1212S
XS1226D
XS1226S
XS1200 Series
(Intel® D-1508 2 Cores, 4GB to 32GB
DDR4 Memory Per Controller)
JetStor SAN Product Line
Optional Host Cards and Accessories:
NIC Slot 1 NIC Slot 2
2x 12Gb SAS Expansion Ports, 2x 10GbE iSCSI (BASE-T) Ports
PSU 1, PSU 2, Management Port
Flash Module Port
LFF SFF
D: Dual Controller
S: Single Upgradable
XD5300 Series, Expansion Enclosures
2/4 port 16Gb/32Gb FC, 2 port 25Gb iSCSI
2 port 10GbE iSCSI RJ45
4 port 1GbE iSCSI, C2F 256G BM/SC
4 port 10GbE iSCSI SFP+
12. RAID EE (RAID 2.0/Distributed RAID)
Up to 58% less time to rebuild RAID
Slightly better performance due to additional active drive in RAID Group
Spare – Empty Space
Parity – Rebuild Metadata
Data Block
Traditional RAID 5 RAID 5 EE
Note: RAID 5 EE and RAID 6 EE are supported
Empty blocks are skipped
[Diagram: Traditional RAID 5 with dedicated spare drives (S) vs. RAID 5 EE with spare capacity (S) distributed across all drives; P = parity, numbers = data blocks]
Benefits
13. RAID EE (RAID 2.0/Distributed RAID)
Traditional RAID 5 RAID 5 EE
Note: RAID 5 EE and RAID 6 EE are supported
Empty blocks are skipped
Benefits
Up to 58% less time to rebuild RAID
Slightly better performance due to additional active drive in RAID Group
[Diagram: the same layouts with a failed drive; in RAID 5 EE the failed blocks are rebuilt into the distributed spare space]
Failed Block
Parity – Rebuild Metadata
Data Block
14. RAID EE (RAID 2.0/Distributed RAID)
Benefits
[Diagram: rebuild comparison. Traditional RAID 5 rebuilds the failed drive's blocks onto a single spare drive; RAID 5 EE rebuilds failed blocks in parallel into spare space distributed across all remaining drives]
Traditional RAID 5
• The hot spare drives are inactive
• When a member drive fails, rebuilt data is written to only one drive, which limits I/O performance
• RAID rebuild takes longer, limited by the time needed to write an entire drive
RAID 5 EE
• Data is distributed across all drives, including spares, which increases I/O performance
• Upon a drive failure, data is written into spare capacity on many drives, shortening rebuild time
• The performance impact on external I/O is minimized during the rebuild process
Rebuilt Block
Parity – Rebuild Metadata
Data Block
Up to 58% less time to rebuild RAID
Slightly better performance due to additional active drive in RAID Group
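The rebuild-time benefit above can be sketched with simple arithmetic (the drive capacity, write speed, and drive count below are illustrative assumptions, not QSAN's published test setup):

```python
# Traditional RAID rewrites the whole failed drive through one spare;
# RAID EE spreads the rebuild writes across the spare capacity of
# every remaining member drive.

def rebuild_hours(capacity_tb, drive_write_mbps, parallel_writers=1):
    """Hours to rewrite capacity_tb of data across parallel_writers drives."""
    total_mb = capacity_tb * 1_000_000
    return total_mb / (drive_write_mbps * parallel_writers) / 3600

# Assumed: 18 TB drive, 200 MB/s sustained writes, 11 surviving members
traditional = rebuild_hours(18, 200)                    # one spare drive
raid_ee = rebuild_hours(18, 200, parallel_writers=11)   # distributed spare
print(f"traditional: {traditional:.1f} h, RAID EE: {raid_ee:.1f} h")
```

Actual savings depend on the layout and on how full the pool is, since RAID EE also skips empty blocks; the 58% figure on the slide comes from the vendor's own testing.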
15. JetStor Online RAID Migration
Drives in current RAID Group
Drives to be added to RAID Group
Online RAID Expansion
• Add drives to SAN
• Let SAN rebuild RAID Group
XS5224D
25. Dual PSU
4x 10Gb iSCSI SFP+
4x PCIe 3x8 Slots
Up to 768GB RDIMM
XCubeFAS XF3126D
6-Core Xeon Bronze (12 Cores per Array)
4x FAN per Controller (8 Per System)
JetStor X Series Gen2, Product Line
3U 26-bay
XF3126D
XF3000-Series
(Intel® Xeon 6-Core, 16GB DDR4
RDIMM with Max 392GB Per Controller)
2/4 port 16G FC
Optional Host Cards
4 port 10GbE SFP+, 2 port 10GbE Base-T
4 port 1GbE Base-T, 2 port 32Gb FC, 2 port 25Gb SFP28
26. QSLife
SSD Life Monitoring
QSRAID
SSD Data Distribution
QThin
Thin Provisioning
RAID EE
50% Faster RAID Rebuilds
QReplica
Remote Replication**
QSnap
16,000 Snapshots
XEVO 1.1
Live Demo
27. XEVO – AFA Management System
XEVO Main Dashboard
• 5 Minute Deployment
• Historical Performance Tracking
Note: Demo available at demo3.qsan.com
28. NAS
(Network Attached Storage)
SAN
(Storage Area Network)
• File level data
• Primary Media: Ethernet
• I/O Protocol: NFS/CIFS/AFP/SMB
• NAS appears to OS as a shared folder
• Inexpensive
• Dependent on the LAN
• Requires no architectural changes
• Hardware hungry
• Block level data
• Primary Media: FC/iSCSI
• I/O Protocol: SCSI
• SAN appears to OS as an attached storage
• Expensive
• Independent of the LAN
• Requires architectural changes
• Thin software layer
29. VMware ESXi Host (x6)
VMware vSphere Production Cluster
SAN Switch SAN Switch
Production Storage
AFA
200+ VMs
VM (x6)
General Applications
Production Storage
2U12 NAS XN8012R
NAS
Shared Storage
SAN
Virtualization Storage
*Note: Both Applications can be implemented with SAN and NAS
10Gb Ethernet
16Gb FC
30. Switch 1
Clients, VMs
JetStor NXT: HA Block and File Storage
Unified Storage
Licensing: iSCSI, FC, CIFS, AFP, NFS, SMB all included
HA Ready, all services mirrored between 2 controllers
Heartbeat
Volume Replication
Server 1 + Cluster Software
+ File Storage
Server 2 + Cluster Software
+ File Storage
SAN Array (Block Storage)
Switch 1 Switch 2
Clients, VMs
Traditional HA Cluster
Licensing: paid protocols, storage features, etc.
HA: Must setup failover
HA Unified Storage
Switch 2
31. 4U 24-bay / 3U 16-bay / 2U 12-bay / 2U 26-bay
XN8024D_8C / XN8016D_8C / XN8012D_8C / XN8026D_8C
XN8000_8C-Series
(Intel® Xeon D-1537 8 Cores, 8GB to
128GB DDR4 Memory Per Controller)
JetStor NXT Product Line
2/4 port 16G FC
Optional Host Cards and Accessories:
4 port 10GbE SFP+
2 port 10GbE Base-T, 4 port 1GbE Base-T
NIC Slot 1 NIC Slot 2
2x 12Gb SAS Expansion Ports, 2x 10GbE iSCSI (BASE-T) Ports
PSU 1, PSU 2, Management Port
Built-in Flash Module
LFF SFF
D: Dual Controller
S: Single Upgradable
XN8024D / XN8016D / XN8012D / XN8026D
XN8000-Series
(Intel® Xeon D-1527 4 Cores, 8GB to
128GB DDR4 Memory Per Controller)
2 port 32Gb FC, 2 port 25Gb SFP28
32. 99.9999% Availability
Active – Active
Controller Design
Online Hardware Upgrades
PO/PI for RAM, Host Cards
Built-In Cache Protection
256GB w/o SC or Battery
Redundant Everything
FAN, PSU, Drives
34. Auto Load Balancing
By creating two or more pools, the XCubeNXT automatically distributes the pools across the two active controllers to optimize performance
[Diagram: users connected through 1GbE ports to both controllers, each controller path serving its own pool (500 MB per path shown)]
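The allocation idea can be sketched as follows (a hypothetical round-robin policy for illustration; the actual XCubeNXT placement logic may differ):

```python
# Newly created pools alternate between the two active controllers so
# that I/O load is spread across both instead of piling onto one.

def assign_controllers(pools):
    """Alternate pool ownership between controllers 'A' and 'B'."""
    return {pool: ("A", "B")[i % 2] for i, pool in enumerate(pools)}

owners = assign_controllers(["Pool1", "Pool2", "Pool3", "Pool4"])
print(owners)  # Pool1/Pool3 on A, Pool2/Pool4 on B
```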
41. Performance (Throughput)

Throughput (MB/s)   Seq. Read   Seq. Write
iSCSI                    2535         1472
CIFS                     4721         4417
NFS                      4731         4719

Model: JetStor 8024D
Memory: 16GB x 1 per controller
NIC: 10GbE SFP+ host card, 10Gb x 4
Drives: Samsung 860 EVO 250GB x 16
Pool 1: RAID 5, SSD x 8, 100GB Volume/LUN
Pool 2: RAID 5, SSD x 8, 100GB Volume/LUN
Memory cache protection: Off
SSD cache: No
In the optimal-performance test, two servers issue I/O to Pool 1 through controller 1 and another two servers issue I/O to Pool 2 through controller 2, for a total of 40Gb of I/O to the XN8024D.
[Diagram: 4 servers -> 10GbE switch -> XN8024D with Pool 1 and Pool 2]
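A back-of-envelope check of these numbers against the network ceiling (decimal units assumed):

```python
# Four 10GbE links give a raw line rate of 40 Gb/s = 5000 MB/s; the
# measured 4721 MB/s CIFS sequential read is ~94% of that ceiling.

links, gbps_per_link = 4, 10
line_rate_mbps = links * gbps_per_link * 1000 / 8   # Gb/s -> MB/s
measured_cifs_read = 4721
utilization = measured_cifs_read / line_rate_mbps
print(f"line rate: {line_rate_mbps:.0f} MB/s, utilization: {utilization:.0%}")
```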
42. Performance (IOPS)

4K IOPS   Random Read   Random Write
iSCSI          109000          17400
CIFS            83400          87500
NFS             80700          52200

Model: JetStor 8024D
Memory: 16GB x 1 per controller
NIC: 10GbE SFP+ host card, 10Gb x 4
SAS HDD: Seagate ZBS054YM0000C709KL4V x 20
SAS SSD: Seagate XS3200LE100003 x 4
Pool 1: RAID 5, SSD x 2, SAS HDD x 10, 100GB Volume/LUN
Pool 2: RAID 5, SSD x 2, SAS HDD x 10, 100GB Volume/LUN
SSD cache: On
In the optimal-performance test, two servers issue I/O to Pool 1 through controller 1 and another two servers issue I/O to Pool 2 through controller 2, for a total of 40Gb of I/O to the XN8024D.
[Diagram: 4 servers -> 10GbE switch -> XN8024D; each pool uses SSD x 2 + HDD x 10]
43. Scale-up with WD JBODs

All models (XCubeSAN/XCubeNXT/XCubeNAS) use the WD Data102 (4U 102-bay) expansion enclosure, up to 4 enclosures; capacities assume 18TB LFF and 15.36TB SFF drives.

Form factor     Max. disk drives      Rack space (U)   Max. raw capacity   Capacity density
LFF 2U 12-bay   12 + 4 x 102 = 420    18               7.56 PB             420 TB/U
LFF 3U 16-bay   16 + 4 x 102 = 424    19               7.63 PB             402 TB/U
LFF 4U 24-bay   24 + 4 x 102 = 432    20               7.78 PB             389 TB/U
SFF 2U 26-bay   26 + 4 x 102 = 434    18               7.74 PB             430 TB/U

Capacity: up to 7.78 PB
JBOD form factors: 4U60, 4U102
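The table's capacity math can be reproduced directly (18 TB LFF drives, as the table assumes):

```python
# Head-unit bays plus four 102-bay JBODs, times the drive size, gives
# the raw capacity; dividing by total rack units gives density.

def max_capacity(head_bays, jbods=4, jbod_bays=102, drive_tb=18):
    drives = head_bays + jbods * jbod_bays
    return drives, drives * drive_tb   # (drive count, raw TB)

drives, raw_tb = max_capacity(head_bays=24)   # 4U 24-bay head unit
density = raw_tb / 20                         # 4U head + 4 x 4U JBODs = 20U
print(drives, raw_tb, round(density))         # 432 7776 389
```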
44. QSM 3.3
ZFS File System
Silent Corruption Prevention
Multi-Backup Options
Rsync, Snapshot, XMirror,
Cloud Sync & Backup
Intuitive Load Balancing
Best Path Selection
XReplicator
Built-in Replication for 100
Devices
Inline Data Reduction
Compression /Dedup
WORM
Write Once Read Many
Data Retention
File/Folder Auto Removal
AES-256 Pool Encryption
and SED Support
Intuitive UI
Easy setup, management
45. ZFS – Futureproof Scale

A ZFS pool is built from disks; from the pool you can create both file systems and ZVOLs (block devices).

                       ZFS           EXT4 + LVM
Address width          128-bit       48-bit
Max volume size        16 Exabytes   1 Exabyte
Max single file size   16 Exabytes   16 Terabytes
46. ZFS – Data Integrity
▌ ZFS Checksum
Checksum for every read/write to
prevent data corruption
[Diagram: the block pointer stores the LBA plus a checksum; data read into memory is verified against that checksum]
▌ Copy on Write – COW
Find free blocks to write new content.
No data corruption during a write operation
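A minimal sketch of the checksum-on-read idea (illustrative only; ZFS actually keeps the checksum in the parent block pointer and uses algorithms such as fletcher4 or sha256):

```python
import hashlib

def write_block(data: bytes):
    # The stored block carries a checksum of its contents.
    return {"data": data, "checksum": hashlib.sha256(data).hexdigest()}

def read_block(block, mirror):
    # A mismatch on read signals silent corruption; self-heal from the mirror.
    if hashlib.sha256(block["data"]).hexdigest() != block["checksum"]:
        block["data"] = mirror["data"]
    return block["data"]

good = write_block(b"x-ray scan")
bad = dict(good, data=b"x-ray sc@n")   # simulate a flipped bit on disk
print(read_block(bad, good))           # b'x-ray scan'
```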
50. JetStor NXT Backup Options

Function          Method                 File type                Interval             Suitable environment
Remote Backup     Snapshot Replication   LUN & shared folder      Scheduled            Media editing / VM (NFS mount)
                  Rsync                  Shared folder            Scheduled            Office environment (ACL) & backup with other platforms
                  XMirror                Volume & shared folder   Real time            Schools & environments needing immediacy
Cloud Backup      Cloud backup           Shared folder            Scheduled            Amazon S3, S3 compatible
PC Backup         XReplicator            Shared folder            Manual & scheduled   Windows PC to XCubeNXT
Server & VM       Acronis                Bare metal               Scheduled            Optional license available
Backup            Veeam                  VM                       Scheduled            3rd-party backup software on server
51. ZFS File System
Silent Corruption
Prevention
Multi-Backup Options
Rsync, Snapshot, XMirror,
Cloud Sync & Backup
Intuitive Load Balancing
Best Path Selection
Scale-up
Scale up to 7PB with
JBOD solution
SmartSimple
Inline Data Reduction
Compression /Dedup
WORM
Write Once Read Many
Data Retention
File/Folder Auto Removal
AES-256 Pool Encryption
and SED Support
Secure
Intuitive UI
Easy setup, management
Active – Active
Controller Design
Online Hardware
Upgrades
RAM, Host Cards, CPU
Built-In Cache Protection
256GB w/o SC or Battery
Always-on
JetStor NXT Summary
52. 2 Bay / 4 Bay / 8-12 Bay / 16-24 Bay
XN8012R
XN8008R/T
XN5004R/T
XN3004R/T
XN8000-Series
Rack Models: Intel Kaby Lake Xeon E3,
Tower Models: Intel Kaby Lake i5
XN5000-Series
Rack Models: Intel Kaby Lake Celeron
Tower Models: Intel Kaby Lake Celeron
XN3000-Series
Tower Models: Intel Apollo Lake
Celeron
XN5012R
XN5008R/T
XN3002T
N: Rack models with N additional SSD bays
T: All tower models are equipped with 1 additional SSD bay
XN7004R/T
XN7012R
XN7008R/T
XN7000-Series
Rack Models: Intel Kaby Lake i3
Tower Models: i3 / Pentium Q-Core
* All Rack models come with redundant PSUs
RE: New 8-12 bay rack series without additional SSD bays
JetStor NAS Product Line
XN7024R
XN7016R
XN8024R
XN8016R
60. NOW IT’S POSSIBLE
with WD
Capacity
Up to 5PB
Interfaces
12x Mini SAS Ports
Form Factors
4U60, 4U102
SSD Support
Up to 24 per JBOD
Integrated Storage Platform
XCubeSAN
WD JBOD
61. VMware ESXi Host (x6)
VMware vSphere Production Cluster VMware vSphere DR Cluster
SAN Switch SAN Switch
Primary Storage
VMware ESXi Host (x4)
SAN Switch SAN Switch
200+ VMs
VM (x6), Test/Dev (x3), VM Replica (x3)
JetStor XEVO
1.4 PB Capacity in 4U
Synchronous Replication
at 2GB/s+
Veeam/Commvault/DataCore
Test and Development VMs working on
full flash VMFS storage provided by
JetStor
JetStor Customer Profile / VM Backup
WD Data60/Data102
High performance hybrid-flash SAN
with Auto-Tiering
62. JetStor Customer Profile / Powerful, Effective Surveillance Storage
XS1216D
NVR (x6)
SAN Switch SAN Switch
Over 11PB usable capacity
Network
Switch
Network
Switch
Over 1000 IP Cameras
WD Data60/Data102 (x2)
NVR
1 / 10 / 25 GbE / iSCSI
8 / 16 Gb FC
Milestone / Axxone / Genetec / …
63. JetStor Customer Profile / All-Flash and Hybrid Media Storage
Data Distribution
Tiger Store / Xsan Server
Workstations
Network Switch FC Switch
1 / 10 / 25 GbE / iSCSI
8 / 16 Gb FC
Cold Data
JetStor Hybrid Array with WD Data102
GROWING WITH PARTNERS
DELIVERING UNIQUE PRODUCT VALUE
This is an overview of the XCube portfolio. AC&NC has comprehensive product lines spanning AFA, SAN, unified storage, and NAS. Each product line shares a common hardware platform but runs different software targeting different markets and applications. With this design, AC&NC can focus on software innovation while providing tremendous value to partners by simplifying service-parts preparation.
In 2020, AC&NC is happy to announce that the XCubeNXT unified storage and the XCubeFAS NVMe AFA have been added to the portfolio, which significantly levels up AC&NC's positioning in the enterprise segment with ultra-high-availability and extreme-performance solutions. In today's presentation we will start with AC&NC's block storage solutions, then cover unified storage and NAS.
This is the full product line of SANs. All SAN and DAS enclosures use unified hardware: trays, PSUs, fans, and C2F modules, with a fully redundant design and extra flexibility for host connections. All systems share the same service parts, which makes servicing much simpler: comparable to, or even better than, tier-1 OEM offerings.
HA: automatic failover and failback.
Data protection/backup: replication and snapshots. Synchronous replication will be announced in January 2021.
Storage efficiency: thin provisioning and RAID EE.
Performance boosting: SSD caching and auto-tiering.
VMware, Microsoft, and Citrix certified.
One of the key challenges is that data keeps growing over time. As drive capacities increase, how do you keep the same SLA for data protection? RAID rebuild time must be addressed. RAID EE is AC&NC's proprietary technology responding to this trend: RAID EE plus fast-rebuild technologies reduce RAID rebuild time, shrinking the window of degraded performance and the risk of data loss should more HDDs fail during a rebuild.
So far we have tried to show that AC&NC's solutions are comparable with tier-1 OEMs', but most important is AC&NC's unique core competence, QSOE, which is how AC&NC stays competitive in the market.
These benchmarks show … consistent read and write performance, thanks to QSOE, which dedicates a CPU core to packet processing.
QSAN has recently introduced all-flash arrays, and StorageReview immediately took one in for testing. The article will be coming in November.
The major difference from hybrid storage is SSD performance optimization, with around a 30% improvement in IOPS. Another key difference is QSLife: this feature creates a controlled environment for SSD health management, distributing data in a way that only one SSD can fail at a time. Since QSAN entered the industry later than its competitors, it has had time to adopt the major all-flash features of other vendors as well, for instance the snapshot recycle bin otherwise seen only in Pure Storage.
Here I use the all-flash product line of one of the leading tier-1 OEMs. The IOPS shown here are 100% random writes at sub-millisecond latency. I chose this product line because all of its performance benchmarks can be found publicly, either in third-party labs or on the OEM's product pages, which helps a lot when making head-to-head comparisons of performance capability and price/performance. The X axis is performance and the Y axis is price. We can easily place the 826FXD on this chart at 300K IOPS @ 0.5ms latency, fairly comparable with this OEM's entry to midrange models. Moreover, with the 812FXD at 67K IOPS @ 0.2ms, or 450K @ 0.5ms, customers now have a more affordable alternative to the flagship models.
This is the other all-flash product line from the same OEM, targeting extremely high-performance block storage at ultra-low latency. With limited options, customers can only choose either an entry model with compromised performance or very costly high-end models. A head-to-head comparison is difficult, since the EF series performs very well under 0.1ms, while the JetStor X Series offers strong midrange solutions generally in the sub-1ms range.
1. Next is the Monday login storm; see the green-circled part of the chart in the lower left corner.
Every Monday morning, the company's staff log into VDI at the same time, creating a short-term, large demand for random IOPS.
Therefore, all-flash storage should be able to cope with this situation; otherwise, the simultaneous logins will cause the system to slow down or even crash.
The chart on the right is also a result of StorageReview's testing.
The 826FXD can withstand login loads of up to 35K IOPS at 1ms latency.
The same calculation shows the XF3126D delivers 60-80K IOPS at 1ms.
2. Next is the graph for the C190. Its performance is very unstable in the face of the Monday login storm: at only 5K IOPS, latency already exceeds 1ms.
3. Last is suitable user scale. See the chart on the right; we use the tested performance of the 826FXD and XF3126D to calculate user scale.
Under ideal conditions and with very powerful servers, the X Series can support more than 6,000 light users or more than 1,000 heavy users at 1ms and 0.5ms latency respectively.
We have an actual customer case in which one 826FXD with a powerful server is enough to run 500 VDIs.
So we can infer that connecting more than 500 VDIs to the XF3126D is not a problem.
But the numbers in the table reflect a very idealized situation. Considering the risk of concentrating VDI on a single device, a failure would cause great losses, so it is not recommended to run more than 500 VDIs on only one all-flash array.
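The user-scale arithmetic works roughly like this (the per-user IOPS figures are assumptions for illustration, not QSAN's published sizing model):

```python
# Divide the array's IOPS at the target latency by the steady-state
# IOPS each desktop generates.

def vdi_capacity(array_iops, iops_per_user):
    return array_iops // iops_per_user

light = vdi_capacity(60_000, 10)   # assumed: light user ~10 IOPS
heavy = vdi_capacity(60_000, 60)   # assumed: heavy user ~60 IOPS
print(light, heavy)                # 6000 1000
```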
Now let's start with a comparison between traditional NAS and SAN. You can easily differentiate NAS and SAN by the technical parameters listed here, and in the most simplified world each application would choose either NAS or SAN to meet its demands. The real world, however, is more complicated.
These are very typical applications for NAS and SAN respectively. But in the real world, almost all enterprises run mixed workloads in their IT infrastructure, facing the challenges of growing data, emerging applications, and, of course, shrinking IT budgets. According to IDC research, businesses are grappling with 50% data growth each year, and organizations managed an average of 9.7 PB of data in 2018 versus 1.45 PB in 2016. This increased pressure on IT has amplified complexity: 66% of IT decision makers say IT is more complex than it was just two years ago. In addition, with the COVID-19 pandemic causing postponed or reduced IT budgets, companies have no choice but to turn to enterprise-grade storage solutions that are simple, versatile, future-ready, and cost-effective. In a word, they need storage that can do more with less.
Save space with less equipment, fewer networking ports and cables, lower electrical power, and lower overall cost!
Now I will start with the dual-active architecture and briefly cover the unique features, performance, and key capabilities offered by the XCubeNXT.
Let's take a look at high availability.
First is our active-active architecture.
As your business grows, service interruption becomes unacceptable.
Beyond a backup strategy, the XCubeNXT brings a dual-active controller design to guarantee business continuity.
In the active-active design, both controllers share the same disk pools and can access the same data; if one controller suffers a hardware failure, the other takes over and continues all services.
Meanwhile, you can simply unplug the failed controller and replace it with a good one; the system returns to normal and users never notice the difference.
As mentioned previously, after using a unit for a period of time you may hit its performance limit and need to upgrade your storage, whether in cache size, host connectivity, or even a higher-end controller. On the XCubeNXT, all of these upgrades can be done without stopping data services.
You can use 2 SSDs in a mirror to achieve the same protection as 3 SSDs, saving SSD cost and space.
This is closer to the theoretical upper limit of 40Gb/s. Since iSCSI is emulated via the file system, the overhead makes its numbers lower; we are working continuously on improvement.
High-capacity hard drives enable solutions of more than 7PB. We have several success stories in Europe where this is used for large-scale backup or archiving.
The basic ZFS unit is a pool created from multiple disks; from the pool we can create both file systems and block devices.
What makes ZFS special is that it has its own volume manager, so it can create software RAID by itself.
It is also designed as a 128-bit file system; compare that with EXT4's 48-bit addressing.
There is almost no practical limit on single file size, directory entries, or pool size. No matter how big your data grows, you can keep it in one volume or folder without risking the data damage that splitting files can cause. For example, with EXT4 an 18TB file must be split into two parts, because EXT4's maximum single file size is 16TB; with ZFS you won't have this kind of problem. In practice, if you check competitors' SMB NAS solutions, you will find they claim limits of 200TB or 250TB as the maximum capacity per volume/folder.
With the XCubeNXT scaled up with WD JBODs to 7PB, there is no limit on either the hardware side or the software side.
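The split-file example reduces to simple arithmetic (binary units assumed for the exabyte conversion):

```python
import math

EXT4_MAX_FILE_TB = 16                 # EXT4 single-file limit from the slide
ZFS_MAX_FILE_TB = 16 * 1024 * 1024    # 16 EiB expressed in TiB

parts_ext4 = math.ceil(18 / EXT4_MAX_FILE_TB)   # an 18 TB file on EXT4
parts_zfs = math.ceil(18 / ZFS_MAX_FILE_TB)     # the same file on ZFS
print(parts_ext4, parts_zfs)   # 2 1
```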
Now let's talk about data integrity, starting with silent data corruption.
Imagine the important X-ray picture above: this is what data corruption, or silent corruption, looks like. Data corruption happens when data picks up errors during transmission or processing while the application remains unaware, and it is not recoverable. The results can range from a minor loss of data to a system crash.
But ZFS is very reliable, with a very strong design for preventing data corruption.
In ZFS, all data is checksummed and copy-on-write. When data is read, ZFS calculates its checksum and compares it with the original; if they do not match, ZFS automatically repairs the damage using data from the other mirror.
Regarding COW: when there is data to write, ZFS finds free blocks and writes the new data there. Only after the write finishes does ZFS re-index to use the new blocks. If the storage suffers power loss or another accident during a write operation, there is no data corruption, since the data structure has not yet changed.
Since every data read and write is traceable, one significant benefit of ZFS is faster RAID rebuilds: it rebuilds only the minimum amount of necessary data, skipping blocks that store no data. Coupled with the absence of size limits on a single volume or folder, this becomes more important as drive capacities grow to 18TB per drive.
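The copy-on-write behaviour described above can be sketched in a few lines (an illustrative model, not ZFS internals):

```python
# New data is written to a free block first; the index flips to the
# new block only after the write completes, so an interrupted write
# leaves the old, consistent version untouched.

storage = {"blk0": b"old version"}
index = {"file.txt": "blk0"}

def cow_write(name, data):
    new_blk = f"blk{len(storage)}"
    storage[new_blk] = data    # 1) write into a free block
    index[name] = new_blk      # 2) atomically repoint the index

cow_write("file.txt", b"new version")
print(storage[index["file.txt"]])   # b'new version'; blk0 still intact
```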
*Deduplication is based on ZFS and provides an inline, block-level function that checks incoming data for block similarity as it enters the system.
Deduplication automatically removes redundant data objects to reduce storage capacity usage.
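A hedged sketch of the inline, block-level idea (illustrative; ZFS actually keys its dedup table on block checksums):

```python
import hashlib

store, refs = {}, []   # unique payloads by digest; logical block list

def ingest(block: bytes):
    digest = hashlib.sha256(block).hexdigest()
    store.setdefault(digest, block)   # keep each payload only once
    refs.append(digest)               # later copies become references

for blk in [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]:
    ingest(blk)

print(len(refs), len(store))   # 4 logical blocks, 2 unique payloads
```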
The XCubeNXT and XCubeNAS are among the most secure storage solutions on the market today. *The XCubeNXT pool encryption mechanism ensures a secure storage environment: any user who wants to modify data must first pass authentication. They support up to AES 256-bit encryption for the whole pool and adopt military-grade, FIPS 140-2 validated encryption, considered the highest security certification for compliance.
*WORM technology is designed to prevent intentional or accidental modification of data for a defined period. Files and folders under WORM protection can only be read during the user-defined period; they cannot be modified until the period has expired. WORM can protect your data from encryption-based ransomware, which installs covertly on a victim's system and encrypts files, making them inaccessible. WORM protects your confidential data from unauthorized modification and threats, ensuring its correctness and integrity.
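The retention rule can be sketched as a simple policy check (illustrative only, not the product's implementation):

```python
from datetime import datetime, timedelta

class WormFile:
    """Writes are rejected until the retention period expires."""
    def __init__(self, data: bytes, retention_days: int):
        self.data = data
        self.locked_until = datetime.now() + timedelta(days=retention_days)

    def write(self, data: bytes):
        if datetime.now() < self.locked_until:
            raise PermissionError("WORM: read-only until retention expires")
        self.data = data

f = WormFile(b"audit log", retention_days=365)
try:
    f.write(b"tampered")   # rejected during the retention period
except PermissionError as e:
    print(e)
```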
*Third, with self-encrypting drive (SED) technology, even if a physical drive is stolen or misplaced, the data on it remains protected against breach, because an authentication key (AK) blocks unauthorized access. In addition, the SED support offered by AC&NC enables secure pool migration between different XCubeNXT systems, and keys are easy to manage by exporting the AK.
This page covers XCubeNAS backup solutions (embedded and supported).
Remote Backup: Snapshot Replication, Rsync, and XMirror.
Cloud Backup: Cloud Sync and Cloud Backup.
PC Backup: XReplicator, powered by Acronis True Image.
Server Backup: additional licenses for Acronis backup options.
Let's take a look at the matrix. By file type, Snapshot supports LUNs and shared folders, XMirror supports volumes and shared folders, and Rsync and Cloud Backup can only back up at the shared-folder level.
As you know, one volume might include several shared folders, and XMirror can back up the whole volume.
Second is the schedule type. Snapshot Replication and Rsync support only regular schedules, and XMirror supports only real-time replication. Cloud Backup runs tasks on a schedule, while Cloud Sync supports either scheduled or real-time backup.
For suitable environments, we list some scenarios to help you advise customers.
Snapshot suits media streaming and VMs with NFS mounts, i.e. large-structure environments.
Remote backup suits office environments where the backup structure is small: it fits the customer's backup schedule and won't cause the scan-time issue we keep mentioning. If the customer needs backup across different platforms, remote backup is also suggested.
For XMirror, we suggest school environments or others requiring instant backup. Please remember that XMirror has the same scan-time limitation as remote backup.
Cloud Sync suits users in, for example, SOHO environments who need to share data between different devices.
If the customer's environment already uses a cloud service we support, we suggest Cloud Backup.
Let's check the Tech Tips.
This is a recap of this webinar; please feel free to ask any questions.
AC&NC has a wide range of NAS products, from 2-bay to 24-bay models. Since the company comes from the enterprise ODM market, the hardware quality is flawless.