Backup and Restore in HugeGraph

Description

Backup and Restore are the features for backing up and restoring a graph. The data covered by backup and restore includes both metadata (schema) and graph data (vertices and edges).

Backup

Exports the metadata and graph data of one graph in a HugeGraph system in JSON format.
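
Each line of an exported file is a single JSON object keyed by the data type, matching the writeFile/writeZip logic shown later in this article. A property-key line might look like this (the ids and fields are illustrative):

    {"propertykeys": [{"id": 1, "name": "name", "data_type": "TEXT", "cardinality": "SINGLE"}]}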

Restore

Re-imports the JSON data exported by Backup into a graph in a HugeGraph system.

Restore has two modes:

  • Restoring mode: restores the metadata and graph data exported by Backup into a HugeGraph system exactly as they were. It is used for graph backup and recovery, and the target graph is usually a new graph (with no metadata or graph data). For example:
    • System upgrade: back up the graph, upgrade the system, then restore the graph into the new system
    • Graph migration: export a graph from one HugeGraph system with Backup, then import it into another HugeGraph system with Restore
  • Merging mode: imports the metadata and graph data exported by Backup into a graph that already contains metadata or graph data. During the process, metadata ids may change, and vertex and edge ids change accordingly.
    • Can be used to merge graphs

API

Backup

Backup exports data via the existing Get RESTful APIs for metadata and graph data; no new APIs were added.
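
For example, schema can be exported with the regular schema GET endpoints. A sketch using curl, assuming a local server on port 8080 and a graph named hugegraph:

    # List all property keys of the graph
    curl http://localhost:8080/graphs/hugegraph/schema/propertykeys

Vertices and edges are too numerous for a single GET, so they are exported shard by shard via the traverser APIs described in the design details below.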

Restore

Restore has two distinct modes: Restoring and Merging. In addition, there is the normal non-restore mode (None). The differences are as follows:

  • None mode: metadata and graph data are written as in normal operation (see the feature documentation). In particular:
    • Metadata (schema) cannot be created with a specified id
    • Graph data (vertex) cannot be created with a specified id when the id strategy is Automatic
  • Restoring mode: restores into a new graph. In particular:
    • Metadata (schema) can be created with a specified id
    • Graph data (vertex) can be created with a specified id when the id strategy is Automatic
  • Merging mode: merges into a graph that already contains metadata and graph data. In particular:
    • Metadata (schema) cannot be created with a specified id
    • Graph data (vertex) can be created with a specified id when the id strategy is Automatic

Normally the graph mode is None. To restore a graph, temporarily switch the mode to Restoring or Merging as needed, and switch it back to None once the restore is complete.

The RESTful API for setting the graph mode is:

http://localhost:8080/graphs/hugegraph/mode
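
A sketch of querying and switching the mode with curl (host, port and graph name are deployment-specific):

    # Query the current graph mode
    curl http://localhost:8080/graphs/hugegraph/mode

    # Switch to RESTORING mode; use "MERGING" or "NONE" likewise
    curl -X PUT -H "Content-Type: application/json" \
         -d '"RESTORING"' \
         http://localhost:8080/graphs/hugegraph/mode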

Usage

Graph backup and restore can be performed with hugegraph-tools.

Backup
bin/hugegraph backup -t all -d data

This command backs up all the metadata and graph data of the graph hugegraph at http://127.0.0.1 into the data directory.

Backup works correctly in all three graph modes.

Restore

Restore has two modes: RESTORING and MERGING. Before restoring, first set the graph mode as needed.

Step 1: check and set the graph mode
bin/hugegraph graph-mode-get

This command shows the current graph mode; possible values are NONE, RESTORING, and MERGING.

bin/hugegraph graph-mode-set -m RESTORING

This command sets the graph mode. Before a restore it can be set to RESTORING or MERGING; this example uses RESTORING.

Step 2: restore the data
bin/hugegraph restore -t all -d data

This command re-imports all the metadata and graph data under the data directory into the graph hugegraph at http://127.0.0.1.

Step 3: restore the graph mode
bin/hugegraph graph-mode-set -m NONE

This command resets the graph mode to NONE.

This completes a full graph backup and restore cycle.

Help

For detailed usage of the backup and restore commands, see the hugegraph-tools documentation.

Selected design details

1. Server side

To support backup/restore, the graph database's functionality needs to be extended:

  1. Schema ids are auto-generated by default, i.e. schema cannot be created with a specified id; but a restore must recreate the schema exactly as it was, which requires specifying the schema id
  2. Similarly, vertices whose label uses the automatic id strategy cannot normally be created with a specified id

Therefore, different modes are added to the graph:

public enum GraphMode {

    /*
     * None mode is the regular mode.
     * 1. Not allowed to create schema with a specified id
     * 2. Not allowed to create a vertex with an id under the AUTOMATIC id strategy
     */
    NONE(1, "none"),

    /*
     * Restoring mode is used to restore schema and graph data to a new graph.
     * 1. Supports creating schema with a specified id
     * 2. Supports creating a vertex with an id under the AUTOMATIC id strategy
     */
    RESTORING(2, "restoring"),

    /*
     * Merging mode is used to merge schema and graph data into an existing graph.
     * 1. Not allowed to create schema with a specified id
     * 2. Supports creating a vertex with an id under the AUTOMATIC id strategy
     */
    MERGING(3, "merging");
    ......

When creating schema, the graph's mode is checked to decide whether a schema id may be specified. This applies to:

  • VertexLabel
  • EdgeLabel
  • IndexLabel
  • PropertyKey
    @Override
    public VertexLabel build() {
        Id id = this.transaction.validOrGenerateId(HugeType.VERTEX_LABEL,
                                                   this.id, this.name);
                                                   
        ......
    }
    public Id validOrGenerateId(HugeType type, Id id, String name) {
        boolean forSystem = Graph.Hidden.isHidden(name);
        if (id != null) {
            this.checkIdAndUpdateNextId(type, id, name, forSystem);
        } else {
            if (forSystem) {
                id = this.getNextSystemId();
            } else {
                id = this.getNextId(type);
            }
        }
        return id;
    }

    private void checkIdAndUpdateNextId(HugeType type, Id id,
                                        String name, boolean forSystem) {
        if (forSystem) {
            if (id.number() && id.asLong() < 0) {
                return;
            }
            throw new IllegalStateException(String.format(
                      "Invalid system id '%s'", id));
        }
        HugeGraph graph = this.graph();
        E.checkState(id.number() && id.asLong() > 0L,
                     "Schema id must be number and >0, but got '%s'", id);
        E.checkState(graph.mode() == GraphMode.RESTORING,
                     "Can't build schema with provided id '%s' " +
                     "when graph '%s' in mode '%s'",
                     id, graph, graph.mode());
        this.setNextIdLowest(type, id.asLong());
    }

Similarly, when creating a vertex, the vertex's VertexLabel and the current graph mode together determine whether the vertex may carry an id and whether the id value is legal:

    @Watched(prefix = "graph")
    public HugeVertex constructVertex(boolean verifyVL, Object... keyValues) {
        HugeElement.ElementKeys elemKeys = HugeElement.classifyKeys(keyValues);

        VertexLabel vertexLabel = this.checkVertexLabel(elemKeys.label(),
                                                        verifyVL);
        Id id = HugeVertex.getIdValue(elemKeys.id());
        List<Id> keys = this.graph().mapPkName2Id(elemKeys.keys());

        // Check whether id match with id strategy
        this.checkId(id, keys, vertexLabel);

        // Check whether passed all non-null property
        this.checkNonnullProperty(keys, vertexLabel);

        // Create HugeVertex
        HugeVertex vertex = new HugeVertex(this, null, vertexLabel);

        // Set properties
        ElementHelper.attachProperties(vertex, keyValues);

        // Assign vertex id
        if (this.graph().mode().maintaining() &&
            vertexLabel.idStrategy() == IdStrategy.AUTOMATIC) {
            // Resume id for AUTOMATIC id strategy in restoring mode
            vertex.assignId(id, true);
        } else {
            vertex.assignId(id);
        }

        return vertex;
    }
    
    public boolean maintaining() {
        return this == RESTORING || this == MERGING;
    }

2. hugegraph-tools side
Backup implementation details
  1. Use the Shard RESTful API to obtain the shard information of vertices or edges (a curl sketch of these endpoints follows this list)
    • A shard splits the primary keys of vertices or edges by a given size, returning the start and end primary-key values
    • Only vertices and edges have a shard API; schema does not
      • Schema data is far smaller than vertex and edge data, so it can simply be fetched with the Get API
  2. Use the Scan RESTful API from multiple threads to fetch the vertices or edges of each shard
    • Multiple threads speed up the backup
    • A retry mechanism avoids failures caused by transient network errors
    • Timeouts are configurable, to avoid losing part of the data due to a timeout
    • Known issues
      • Cassandra backend
        • Some shards hold too much data (over 800,000 records), which causes an error; the fix is to split such shards again
        • Cassandra's shard range is a "ring" over the long value space, i.e. 0x0000000000000000 to 0xffffffffffffffff with wrap-around; represented as signed longs, two's complement makes splitting tricky (a shard may start at a positive long and end at a negative one)
      • RocksDB or HBase backend
        • RocksDB shards split on the first 4 bytes of the primary key, i.e. partition 0x00000000-0xffffffff, so the over-800,000 problem can also occur
        • The 4-byte prefix space is small, which causes clustering: a single prefix value may correspond to more than 800,000 records
      • General issue: when a single vertex really does have more than 800,000 edges, the over-limit problem is unavoidable; it must be solved by adding paging support to shards
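
A sketch of the two traverser endpoints involved, assuming the shard/scan APIs used by hugegraph-client and a local server (split_size and the shard range below are illustrative):

    # Split all vertices into shards of roughly 1048576 bytes each
    curl "http://localhost:8080/graphs/hugegraph/traversers/vertices/shards?split_size=1048576"

    # Scan the vertices of one shard by its start/end values
    curl "http://localhost:8080/graphs/hugegraph/traversers/vertices/scan?start=0&end=4294967295"

Edges are fetched the same way via traversers/edges/shards and traversers/edges/scan.
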
  • Failure retry code
    public <R> R retry(Supplier<R> supplier, String description) {
        int retries = 0;
        R r = null;
        do {
            try {
                r = supplier.get();
            } catch (Exception e) {
                if (retries == this.retry) {
                    throw new ClientException(
                              "Exception occurred while %s(after %s retries)",
                              e, description, this.retry);
                }
                // Ignore exception and retry
                continue;
            }
            break;
        } while (retries++ < this.retry);
        return r;
    }
  • Shard re-splitting
    private static List<Shard> splitShard(Shard shard) {
        List<Shard> shards = new ArrayList<>(MAX_SPLIT_COUNT);
        long start = Long.valueOf(shard.start());
        long end = Long.valueOf(shard.end());
        boolean boundary = false;
        if (start > 0 && end < 0) {
            if (start != Long.MAX_VALUE) {
                shards.add(new Shard(shard.start(),
                                     String.valueOf(Long.MAX_VALUE), 0));
            } else {
                boundary = true;
            }
            if (end != Long.MIN_VALUE) {
                shards.add(new Shard(String.valueOf(Long.MIN_VALUE),
                                     shard.end(), 0));
            } else {
                boundary = true;
            }
            if (boundary) {
                shards.add(new Shard(String.valueOf(Long.MAX_VALUE),
                                     String.valueOf(Long.MIN_VALUE), 0));
            }
            return shards;
        } else {
            return splitShardEven(shard);
        }
    }

    private static List<Shard> splitShardEven(Shard shard) {
        List<Shard> shards = new ArrayList<>(MAX_SPLIT_COUNT);
        long start = Long.valueOf(shard.start());
        long end = Long.valueOf(shard.end());

        long step = (end - start) / MAX_SPLIT_COUNT;
        if (step == 0L) {
            step = 1L;
        }
        long currentLow = start;
        do {
            long currentHigh = currentLow + step;
            if (currentHigh > end) {
                currentHigh = end;
            }
            shards.add(new Shard(String.valueOf(currentLow),
                                 String.valueOf(currentHigh), 0));
            currentLow = currentHigh;
        } while (currentLow < end);
        return shards;
    }
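
For example (values chosen for illustration): a wrap-around shard with start=100 and end=-100 is split by splitShard into [100, Long.MAX_VALUE] and [Long.MIN_VALUE, -100]; only when the start is already Long.MAX_VALUE or the end is already Long.MIN_VALUE is the boundary shard [Long.MAX_VALUE, Long.MIN_VALUE] emitted. Non-wrapping shards go to splitShardEven, which cuts the range into at most MAX_SPLIT_COUNT equal steps.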

  • schema backup
    protected void backupPropertyKeys(String filename) {
        List<PropertyKey> pks = this.client.schema().getPropertyKeys();
        this.propertyKeyCounter.getAndAdd(pks.size());
        this.write(filename, HugeType.PROPERTY_KEY, pks);
    }
  • vertex/edge backup
    protected void backupVertices(String prefix) {
        long splitSize = this.splitSize();
        List<Shard> shards = retry(() ->
                             this.client.traverser().vertexShards(splitSize),
                             "querying shards of vertices");
        this.writeShards(prefix + ALL_SHARDS, shards);
        int i = 0;
        for (Shard shard : shards) {
            String file = prefix + (i++ % threadsNum());
            this.backupVertexShard(file, shard, true);
        }
        this.awaitTasks();
        this.postProcessFailedShard(HugeType.VERTEX, prefix);
    }
    
    private void backupVertexShard(String file, Shard shard, boolean first) {
        String desc = String.format("backing up vertices[shard:%s]", shard);
        List<Vertex> vertices = null;
        try {
            vertices = retry(() -> this.client.traverser().vertices(shard),
                             desc);
        } catch (ClientException e) {
            if (first) {
                this.exceptionHandler(e, HugeType.VERTEX, shard);
            } else {
                throw e;
            }
        }
        if (vertices == null || vertices.isEmpty()) {
            return;
        }
        this.vertexCounter.getAndAdd(vertices.size());
        this.writeZip(file, HugeType.VERTEX, vertices);
    }
  • Handling shards that exceed the 800,000-record limit
    private void processLimitExceedShardAsync(
                 Shard shard, String prefix, HugeType type,
                 TriConsumer<String, Shard, Boolean> consumer) {
        if (!this.needToHandleShard(shard)) {
            Exception e = new ClientException("Single value limit exceed");
            this.logExceptionWithShard(e, type, shard);
            return;
        }
        int i = 0;
        for (Shard s : splitShard(shard)) {
            if (!this.needToHandleShard(s)) {
                Exception e = new ClientException("Single value limit exceed");
                this.logExceptionWithShard(e, type, shard);
                return;
            }
            String file = prefix + (i++ % threadsNum());
            this.submit(() -> {
                try {
                    this.processLimitExceedShard(s, file, type, consumer);
                } catch (Throwable e) {
                    this.logExceptionWithShard(e, type, s);
                }
            });
        }
    }

    private void processLimitExceedShard(
                 Shard shard, String file, HugeType type,
                 TriConsumer<String, Shard, Boolean> consumer) {
        try {
            consumer.accept(file, shard, false);
        } catch (ClientException e) {
            if (isLimitExceedException(e)) {
                String prefix = prefix(type, file);
                this.processLimitExceedShardAsync(shard, prefix,
                                                  type, consumer);
            } else {
                this.logExceptionWithShard(e, type, shard);
            }
        }
    }
Restore implementation details
  • Schema restore
  1. In MERGING mode the existing schema is reused rather than recreated verbatim, so it is necessary to:
    • Set checkExisted to false
    • Reset the schema's id
  2. In RESTORING mode the schema must stay identical to the original, so it is necessary to:
    • Check whether it already exists, i.e. set checkExisted to true
    • Allow creating schema with a specified id
  3. Restore schema records one at a time
    private void restorePropertyKeys(HugeType type, String dir) {
        String fileName = type.string();
        BiConsumer<String, String> consumer = (t, l) -> {
            for (PropertyKey pk : this.readList(t, PropertyKey.class, l)) {
                if (this.mode == GraphMode.MERGING) {
                    pk.resetId();
                    pk.checkExist(false);
                }
                this.client.schema().addPropertyKey(pk);
                this.propertyKeyCounter.getAndIncrement();
            }
        };
        this.restore(type, Paths.get(dir, fileName).toFile(), consumer);
    }
  • Vertex restore
  1. The ids of vertices with the primary-key id strategy must be cleared
    • In MERGING mode the VertexLabel id may have changed, so such vertices are sent without an id and the id is regenerated
    • Vertices with customized ids keep their ids; no update needed
    • Vertices with automatic ids effectively get specified ids in restoring or merging mode; no update needed
  2. Insert in batches of 500
  3. Multi-threading for speed
  4. A retry mechanism to ensure success
    private void restoreVertices(HugeType type, String dir) {
        this.initPrimaryKeyVLs();
        String filePrefix = type.string();
        List<File> files = dataFiles(dir, filePrefix);
        printRestoreFiles(type, files);
        BiConsumer<String, String> consumer = (t, l) -> {
            List<Vertex> vertices = this.readList(t, Vertex.class, l);
            int size = vertices.size();
            for (int i = 0; i < size; i += BATCH_SIZE) {
                int toIndex = Math.min(i + BATCH_SIZE, size);
                List<Vertex> subVertices = vertices.subList(i, toIndex);
                for (Vertex vertex : subVertices) {
                    if (this.primaryKeyVLs.containsKey(vertex.label())) {
                        vertex.id(null);
                    }
                }
                this.retry(() -> this.client.graph().addVertices(subVertices),
                           "restoring vertices");
                this.vertexCounter.getAndAdd(toIndex - i);
            }
        };
        for (File file : files) {
            this.submit(() -> {
                this.restoreZip(type, file, consumer);
            });
        }
        this.awaitTasks();
    }
  • Edge restore
  1. Edges have no id strategy; an edge id is always the concatenation of its vertex ids and label, so when a vertex id changes the edge id must be updated accordingly
    • Vertices with customized or automatic ids do not change; no update needed
    • Vertices with primary-key ids may have changed and must be updated with the current VertexLabel id in the target system
  2. Insert in batches of 500
  3. Multi-threading for speed
  4. A retry mechanism to ensure success
  5. Do not check whether the endpoint vertices exist
    private void restoreEdges(HugeType type, String dir) {
        this.initPrimaryKeyVLs();
        String filePrefix = type.string();
        List<File> files = dataFiles(dir, filePrefix);
        printRestoreFiles(type, files);
        BiConsumer<String, String> consumer = (t, l) -> {
            List<Edge> edges = this.readList(t, Edge.class, l);
            int size = edges.size();
            for (int i = 0; i < size; i += BATCH_SIZE) {
                int toIndex = Math.min(i + BATCH_SIZE, size);
                List<Edge> subEdges = edges.subList(i, toIndex);
                /*
                 * Edge id is concat using source and target vertex id and
                 * vertices of primary key id strategy might have changed
                 * their id
                 */
                this.updateVertexIdInEdge(subEdges);
                this.retry(() -> this.client.graph().addEdges(subEdges, false),
                           "restoring edges");
                this.edgeCounter.getAndAdd(toIndex - i);
            }
        };
        for (File file : files) {
            this.submit(() -> {
                this.restoreZip(type, file, consumer);
            });
        }
        this.awaitTasks();
    }
    
    private void updateVertexIdInEdge(List<Edge> edges) {
        for (Edge edge : edges) {
            edge.source(this.updateVid(edge.sourceLabel(), edge.source()));
            edge.target(this.updateVid(edge.targetLabel(), edge.target()));
        }
    }

    private Object updateVid(String label, Object id) {
        if (this.primaryKeyVLs.containsKey(label)) {
            String sid = (String) id;
            return this.primaryKeyVLs.get(label) +
                   sid.substring(sid.indexOf(':'));
        }
        return id;
    }
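
For example (ids are hypothetical): if a primary-key vertex was exported with id "1:marko" and its VertexLabel's id in the target graph is now 5, updateVid keeps the substring from the first ':' and rewrites the id to "5:marko".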

Data sources and destinations for backup/restore

Currently the local file system plus Zip compression is supported.

Support for HDFS is under development.

  • backup
    private void writeFile(String file, Object type, List<?> list) {
        Lock lock = locks.lock(file);
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream(LBUF_SIZE);
             FileOutputStream fos = new FileOutputStream(file, true)) {
            String key = String.format("{\"%s\": ", type);
            baos.write(key.getBytes(API.CHARSET));
            this.client.mapper().writeValue(baos, list);
            baos.write("}\n".getBytes(API.CHARSET));
            fos.write(baos.toByteArray());
        } catch (Exception e) {
            Printer.print("Failed to serialize %s: %s", type, e);
        } finally {
            lock.unlock();
        }
    }

    private void writeZip(String file, HugeType type, List<?> list) {
        this.writeZip(file, type.string(), list);
    }

    private void writeZip(String file, String type, List<?> list) {
        Lock lock = locks.lock(file);
        ByteArrayOutputStream baos = new ByteArrayOutputStream(LBUF_SIZE);
        FileOutputStream fos;
        ZipOutputStream zos;
        if (this.files.get(file) == null) {
            try {
                fos = new FileOutputStream(file + ".zip", true);
                zos = new ZipOutputStream(fos);
                ZipEntry entry = new ZipEntry(file);
                zos.putNextEntry(entry);
                this.files.putIfAbsent(file, zos);
            } catch (IOException e) {
                Printer.print("Failed to backup file '%s'", file);
                System.exit(-1);
            }
        }
        zos = this.files.get(file);
        try {
            String key = String.format("{\"%s\": ", type);
            baos.write(key.getBytes(API.CHARSET));
            this.client.mapper().writeValue(baos, list);
            baos.write("}\n".getBytes(API.CHARSET));
            zos.write(baos.toByteArray());
        } catch (Exception e) {
            Printer.print("Failed to serialize %s: %s", type, e);
        } finally {
            lock.unlock();
        }
    }

  • restore
    private void restore(HugeType type, File file,
                         BiConsumer<String, String> consumer) {
        E.checkArgument(
                file.exists() && file.isFile() && file.canRead(),
                "Need to specify a readable filter file rather than: %s",
                file.toString());
        try (InputStream is = new FileInputStream(file);
             InputStreamReader isr = new InputStreamReader(is, API.CHARSET);
             BufferedReader reader = new BufferedReader(isr)) {
            String line;
            while ((line = reader.readLine()) != null) {
                consumer.accept(type.string(), line);
            }
        } catch (IOException e) {
            throw new ClientException("IOException occur while reading %s",
                                      e, file.getName());
        }
    }

    private void restoreZip(HugeType type, File file,
                            BiConsumer<String, String> consumer) {
        E.checkArgument(
                file.exists() && file.isFile() && file.canRead(),
                "Need to specify a readable filter file rather than: %s",
                file.toString());
        E.checkArgument(file.getAbsolutePath().endsWith(".zip"),
                        "'%s' files must be zip archive, but got '%s'",
                        type, file);

        String charset = API.CHARSET;
        try (InputStream is = new FileInputStream(file);
             ZipInputStream zis = new ZipInputStream(is)) {
            while (zis.getNextEntry() != null) {
                InputStreamReader isr = new InputStreamReader(zis, charset);
                BufferedReader reader = new BufferedReader(isr);
                String line;
                while ((line = reader.readLine()) != null) {
                    consumer.accept(type.string(), line);
                }
            }
        } catch (IOException e) {
            throw new ClientException("IOException occur while reading %s",
                                      e, file.getName());
        }
    }
