Integrating TensorFlow with Other Open-Source Frameworks


TensorFlow Ecosystem

This repository contains examples for integrating TensorFlow with other open-source frameworks. The examples are minimal and intended for use as templates. Users can tailor the templates for their own use-cases.

If you have any additions or improvements, please create an issue or pull request.

Contents

docker - Docker configuration for running TensorFlow on cluster managers.
kubeflow - A Kubernetes-native platform for ML:
    - A K8s custom resource for running distributed TensorFlow jobs
    - Jupyter images for different versions of TensorFlow
    - TFServing Docker images and K8s templates
kubernetes - Templates for running distributed TensorFlow on Kubernetes.
marathon - Templates for running distributed TensorFlow using Marathon, deployed on top of Mesos.
hadoop - TFRecord file InputFormat/OutputFormat for Hadoop MapReduce and Spark.
spark-tensorflow-connector - Spark TensorFlow Connector.
spark-tensorflow-distributor - Python package that helps users do distributed training with TensorFlow on their Spark clusters.

Distributed TensorFlow

See the Distributed TensorFlow documentation for a description of how it works. The examples in this repository focus on the most common form of distributed training: between-graph replication with asynchronous updates.

Common Setup for distributed training

Every distributed training program has some common setup. First, define flags so that the worker knows about other workers and knows what role it plays in distributed training:

# Flags for configuring the task
flags.DEFINE_integer("task_index", None,
                     "Worker task index, should be >= 0. task_index=0 is "
                     "the master worker task that performs the variable "
                     "initialization.")
flags.DEFINE_string("ps_hosts", None,
                    "Comma-separated list of hostname:port pairs")
flags.DEFINE_string("worker_hosts", None,
                    "Comma-separated list of hostname:port pairs")
flags.DEFINE_string("job_name", None, "job name: worker or ps")
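As a hedged sketch of how these flag definitions are typically wired into a runnable script (assuming the TF 1.x tf.app.flags API; the body of main() is illustrative, not the repository's exact code), the DEFINE_* calls above sit at module level and tf.app.run() parses the command line before calling main():

import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS

# ... the flags.DEFINE_* calls shown above go here, at module level ...

def main(unused_argv):
  # Fail fast if the required flags were not supplied on the command line.
  if FLAGS.job_name is None or FLAGS.task_index is None:
    raise ValueError("--job_name and --task_index must be specified.")
  # Construct the cluster, start the server, then build and run the graph,
  # as shown in the following sections.

if __name__ == "__main__":
  tf.app.run()  # Parses the command-line flags, then calls main().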

Then, start your server. Since workers and parameter servers (ps jobs) usually share a common program, the parameter servers should stop at this point: they simply call join() on the server and block.

# Construct the cluster and start the server
ps_spec = FLAGS.ps_hosts.split(",")
worker_spec = FLAGS.worker_hosts.split(",")
cluster = tf.train.ClusterSpec({
    "ps": ps_spec,
    "worker": worker_spec})
server = tf.train.Server(
    cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_index)
if FLAGS.job_name == "ps":
  server.join()
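As a concrete illustration (the hostnames below are hypothetical), launching the script with --ps_hosts=ps0.example.com:2222 and --worker_hosts=worker0.example.com:2222,worker1.example.com:2222 yields a cluster spec equivalent to:

cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]})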

After this point, your code varies depending on the form of distributed training you intend to do. The most common form is between-graph replication.

Between-graph Replication

You must explicitly set the device before graph construction for this mode of training. The following code snippet from the Distributed TensorFlow tutorial demonstrates the setup:

with tf.device(tf.train.replica_device_setter(
    worker_device="/job:worker/task:%d" % FLAGS.task_index,
    cluster=cluster)):
  # Construct the TensorFlow graph.

# Run the TensorFlow graph.
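To make this concrete, here is a minimal sketch of a worker's graph construction and training loop with asynchronous updates (TF 1.x API; the model, loss, optimizer, and the next_batch() input function are hypothetical placeholders chosen for illustration, not the repository's code):

is_chief = (FLAGS.task_index == 0)

with tf.device(tf.train.replica_device_setter(
    worker_device="/job:worker/task:%d" % FLAGS.task_index,
    cluster=cluster)):
  # Variables created here are placed on the parameter servers;
  # the remaining ops run on this worker.
  global_step = tf.train.get_or_create_global_step()
  x = tf.placeholder(tf.float32, [None, 784])
  y = tf.placeholder(tf.float32, [None, 10])
  logits = tf.layers.dense(x, 10)
  loss = tf.losses.softmax_cross_entropy(y, logits)
  train_op = tf.train.AdagradOptimizer(0.01).minimize(
      loss, global_step=global_step)

# Each worker runs its own copy of this loop; gradients are applied to the
# shared variables asynchronously, without coordination between workers.
with tf.train.MonitoredTrainingSession(
    master=server.target, is_chief=is_chief) as sess:
  while not sess.should_stop():
    batch_x, batch_y = next_batch()  # Hypothetical input function.
    sess.run(train_op, feed_dict={x: batch_x, y: batch_y})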

Requirements To Run the Examples

To run our examples, the Jinja2 template engine must be installed:

# On Ubuntu
sudo apt-get install python-jinja2

# On most other platforms
sudo pip install Jinja2

Jinja is used for template expansion. Each framework also has its own requirements; please refer to that framework's README for details.
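For illustration only, a minimal sketch of what template expansion with the Jinja2 Python API looks like (the template text and variable names here are hypothetical, not taken from the repository's templates):

from jinja2 import Template

# Hypothetical Kubernetes-style manifest fragment parameterized by the
# number of worker replicas.
template_text = """
kind: ReplicationController
metadata:
  name: tf-worker
spec:
  replicas: {{ num_workers }}
"""

print(Template(template_text).render(num_workers=4))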
