* Rollforward of cl/211656888 after fixing failing unit test. (Mark Heffernan, 2018-09-05)
  *** Original change description ***
  Add HloSchedule class representing a sequential order of an HloModule. Currently we represent a sequential schedule of a module using a SequentialHloOrdering::HloModuleSequence which is a type alias of a bare map from HloComputation* to std::vector<HloInstruction*>. This CL replaces this with a proper class which results in better encapsulation of code which deals with schedules and better enforcement of invariants.
  ***
  PiperOrigin-RevId: 211726890
* Change the graph-mode API of the learning_rate_decay functions in TF 2.0 to return a no-arg callable that outputs a learning rate, instead of directly outputting a learning rate tensor. (A. Unique TensorFlower, 2018-09-05)
  This brings the graph-mode API in line with the eager execution API, where this change was made to allow changing the learning rate value across different invocations of optimizer functions.
  PiperOrigin-RevId: 211726295
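The returned-callable pattern described above can be sketched in plain Python; the names below are illustrative stand-ins, not the actual TensorFlow API:

```python
def exponential_decay(initial_rate, decay_rate, decay_steps, step_counter):
    """Return a no-arg callable that computes the current learning rate.

    Re-evaluating the callable picks up the latest step count, which is
    what lets optimizers see an updated rate on every invocation instead
    of a learning-rate value frozen at graph-construction time.
    """
    def learning_rate():
        return initial_rate * decay_rate ** (step_counter() / decay_steps)
    return learning_rate

step = [0]
lr = exponential_decay(0.1, 0.5, 10, lambda: step[0])
first = lr()   # rate at step 0
step[0] = 10   # advance the (simulated) global step
later = lr()   # same callable, new rate
```

Calling `lr()` twice with different step counts yields different rates from the same object, which is the property the eager API relies on.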
* [Keras / Cloud TPU]: Correct indexing for software pipelining. (Brennan Saeta, 2018-09-05)
  PiperOrigin-RevId: 211724843
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211722113
* Upload floating point mobilenet-v2 and resnet-v2-101 models. (Raghuraman Krishnamoorthi, 2018-09-05)
  Also upload fully quantized mobilenet-v2 and inception-v3 models.
  PiperOrigin-RevId: 211721504
* Propagate eager output tensor types in TFLite. (Jared Duke, 2018-09-05)
  PiperOrigin-RevId: 211721354
* Fix lite_test.py. (Nupur Garg, 2018-09-05)
  PiperOrigin-RevId: 211719399
* Disable msan in failing test. (Olivia Nordquist, 2018-09-05)
  PiperOrigin-RevId: 211719342
* Re-added proto field for dynamic learning rate support (not usable yet). (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211719009
* Implements TPU alltoall op. (Youlong Cheng, 2018-09-05)
  RELNOTES: n/a
  PiperOrigin-RevId: 211718248
* [tf.data] Surface errors correctly in MapDefunOp by using different CancellationManagers for each run of the function. (Rachel Lim, 2018-09-05)
  PiperOrigin-RevId: 211717580
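The per-run cancellation scope described above can be illustrated with a small Python sketch. The class and functions are hypothetical stand-ins, not the actual TF internals; the point is only why a shared cancellation scope masks errors:

```python
class CancellationManager:
    """Tracks whether one cancellation scope has been cancelled."""
    def __init__(self):
        self.cancelled = False

def run(fn, x, manager):
    """Run fn(x) inside the given cancellation scope."""
    if manager.cancelled:
        # A scope cancelled by an earlier failure poisons this run,
        # hiding the fact that this run would have succeeded.
        raise RuntimeError("cancelled before start")
    try:
        return fn(x)
    except Exception:
        manager.cancelled = True  # cancel everything sharing this manager
        raise

def fn(x):
    if x < 0:
        raise ValueError("bad element")
    return x * 2

# One shared manager across runs: the failure on -1 cancels the scope,
# so the next (valid) run fails with a cancellation error instead.
shared = CancellationManager()
try:
    run(fn, -1, shared)
except ValueError:
    pass
try:
    run(fn, 3, shared)
    second_run_ok = True
except RuntimeError:
    second_run_ok = False

# A fresh manager per run: the valid run is unaffected by the earlier failure.
try:
    run(fn, -1, CancellationManager())
except ValueError:
    pass
ok = run(fn, 3, CancellationManager())
```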
* Fix several build warnings in TFLite. (Jared Duke, 2018-09-05)
  PiperOrigin-RevId: 211715608
* Mark tf.GraphKeys.VARIABLES as deprecated. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211714574
* Temporarily disable distributed coordinator training when using TPUStrategy. (Frank Chen, 2018-09-05)
  PiperOrigin-RevId: 211712907
* Update diagram in TOCO README. (Nupur Garg, 2018-09-05)
  PiperOrigin-RevId: 211711493
* Expose an axis argument for VocabInfo, which allows for warm-starting of the second axis of Tensors through tf.train.warm_start. (Eddie Zhou, 2018-09-05)
  Note that the underlying initializer already has this functionality (for example, for output layers).
  PiperOrigin-RevId: 211709879
* Deprecate `tf.ReaderBase` and related APIs. (Derek Murray, 2018-09-05)
  These APIs are based on queue runners, which have been deprecated and will be removed in TensorFlow 2.0. They have been replaced with `tf.data.Dataset`, which provides a more efficient version of the same functionality.
  PiperOrigin-RevId: 211708268
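The shape of the replacement pattern can be sketched with a plain Python generator standing in for a `tf.data`-style pipeline (illustrative only; the real replacement is the `tf.data.Dataset` API, not this code):

```python
import os
import tempfile

def read_lines(filenames):
    """Dataset-style input: a plain iterator over records, no queue runners.

    Unlike a ReaderBase-style reader, there is no background thread or
    queue to manage; iteration itself drives the reading.
    """
    for name in filenames:
        with open(name) as f:
            for line in f:
                yield line.rstrip("\n")

# Demonstrate on a throwaway file.
tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
tmp.write("a\nb\nc\n")
tmp.close()
lines = list(read_lines([tmp.name]))
os.unlink(tmp.name)
```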
* Fold CapturingGraph into FuncGraph. (Skye Wanderman-Milne, 2018-09-05)
  There's no need for the two separate classes anymore. This also cleans up some other parts of the interface:
  * Removes clear_resource_control_flow_state, which isn't used anywhere
  * Makes capture_value a private method of FuncGraph (_capture_helper)
  * Makes create_substitute_placeholder private
  PiperOrigin-RevId: 211707906
* Remove logging that generates tons of logs for large models. (Jianwei Xie, 2018-09-05)
  PiperOrigin-RevId: 211707155
* [tf.data] Minor fix to remove an unnecessary difference between the implementations of the batch and padded batch reducers. (Jiri Simsa, 2018-09-05)
  PiperOrigin-RevId: 211706766
* Merge pull request #21993 from perfinion:jpeg (TensorFlower Gardener, 2018-09-05)
  PiperOrigin-RevId: 211706322
* Experimental work-in-progress support for TPUStrategy in keras. (Priya Gupta, 2018-09-05)
  PiperOrigin-RevId: 211705274
* Merge pull request #22046 from yongtang:08202018-conv1d-doc (TensorFlower Gardener, 2018-09-05)
  PiperOrigin-RevId: 211705018
* Support converting eager tensor to tf.float16 if a numpy half is passed. (Akshay Modi, 2018-09-05)
  This still defaults to float32 for all normal floats.
  PiperOrigin-RevId: 211704918
* Skip quantization of optional tensors (tensor_idx = -1). (Suharsh Sivakumar, 2018-09-05)
  PiperOrigin-RevId: 211703281
* Special-case the AccumulateNV2 op in print_selective_registration_header. (A. Unique TensorFlower, 2018-09-05)
  AccumulateNV2 doesn't have or need a kernel. It gets rewritten to other ops by accumulate_n_optimizer.cc. This change allows it to be mentioned in the output of print_selective_registration_header, rather than being ignored with a warning. Behavior for other ops is preserved.
  PiperOrigin-RevId: 211701878
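The special-casing amounts to keeping an allow-list of known kernel-less ops. A hypothetical sketch of that selection logic (not the actual tool's code or output format):

```python
# Ops that are rewritten away before execution (here, by a graph
# optimizer) and therefore have no kernel, but should still be listed
# in the selective-registration output rather than warned about.
OPS_WITHOUT_KERNELS = {"AccumulateNV2"}

def select_ops(used_ops, ops_with_kernels):
    """Split used ops into those to register and those to warn about."""
    selected, warned = [], []
    for op in used_ops:
        if op in ops_with_kernels or op in OPS_WITHOUT_KERNELS:
            selected.append(op)
        else:
            warned.append(op)
    return selected, warned

selected, warned = select_ops(
    ["Add", "AccumulateNV2", "MysteryOp"], ops_with_kernels={"Add"})
```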
* Deprecate `tf.train.batch()` and related APIs. (Derek Murray, 2018-09-05)
  These APIs are based on queue runners, which have been deprecated and will be removed in TensorFlow 2.0. They have been replaced with `tf.data.Dataset`, which provides a more efficient version of the same functionality.
  PiperOrigin-RevId: 211700442
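The batching half of the replacement can likewise be sketched as a generator combinator (illustrative only; the real replacement is `tf.data.Dataset.batch`, which also handles parallelism and prefetching):

```python
def batch(iterable, batch_size, drop_remainder=False):
    """Group consecutive elements into lists of size batch_size.

    No queue runner threads: batching is just buffering inside the
    iterator that the consumer drives directly.
    """
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) == batch_size:
            yield buf
            buf = []
    if buf and not drop_remainder:
        yield buf  # final partial batch

batches = list(batch(range(7), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```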
* Merge pull request #21716 from artsobolev:pfor-softplus-fix (TensorFlower Gardener, 2018-09-05)
  PiperOrigin-RevId: 211700362
* Correct gradient for multi-output tfe.py_func. (Alexandre Passos, 2018-09-05)
  PiperOrigin-RevId: 211698400
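For a multi-output function, the correct gradient accumulates each output's upstream gradient through that output's own partial derivative. A minimal chain-rule sketch (this illustrates the math the fix must respect, not the py_func implementation itself):

```python
def f(x):
    # A two-output function: y1 = x**2, y2 = 3*x.
    return x * x, 3.0 * x

def grad_f(x, dy1, dy2):
    """Vector-Jacobian product for f.

    The contributions from BOTH outputs must be summed; dropping either
    output's term (the bug class here) yields a wrong gradient.
    """
    return 2.0 * x * dy1 + 3.0 * dy2

g = grad_f(4.0, 1.0, 1.0)  # 2*4*1 + 3*1 = 11.0
```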
* [XLA] Rename PrecisionConfigProto to PrecisionConfig. (David Majnemer, 2018-09-05)
  The "Proto" suffix adds little clarity but makes a long type name even longer.
  PiperOrigin-RevId: 211693871
* [XLA] Make tensorflow/compiler use absl::{StrCat,string_view,InlinedVector} consistently. (Benjamin Kramer, 2018-09-05)
  StringPiece is an alias for absl::string_view, and InlinedVector is aliased to absl::InlinedVector. StrCat is compatible, so swapping it out is safe.
  PiperOrigin-RevId: 211691840
* Change tags for estimator_test. (Austin Anderson, 2018-09-05)
  PiperOrigin-RevId: 211688974
* [tf.data]: Fix internal comment. (Brennan Saeta, 2018-09-05)
  PiperOrigin-RevId: 211687433
* BEGIN_PUBLIC (Mark Heffernan, 2018-09-05)
  Automated rollback of commit 7fa693209fe238478739b3982f652a7e35be91f3
  PiperOrigin-RevId: 211681957
* Make the TFLite NNAPI delegate friendlier to application code; in particular, allow running the benchmark on O-MR1 without an exit() of the process. (A. Unique TensorFlower, 2018-09-05)
  Also fixes a bug in the interpretation of error values (NNAPI vs. TFLite error codes).
  PiperOrigin-RevId: 211681942
* Make logging less verbose. (Sanjoy Das, 2018-09-05)
  I want --vmodule=xla_compilation_cache=1 to print only the most essential things.
  PiperOrigin-RevId: 211676846
* Internal Change. (Michael Case, 2018-09-05)
  PiperOrigin-RevId: 211666438
* Internal change. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211665268
* Allow gradients() calls from inside a tfe.defun wrt captured tensors. (Skye Wanderman-Milne, 2018-09-05)
  This modifies https://github.com/tensorflow/tensorflow/commit/834da2c3fddab1bbbce742db572cfe65dd320fcd to work with tfe.defun in addition to the legacy Defun implementation.
  PiperOrigin-RevId: 211663702
* [TF:XLA] Define DefaultPrecisionConfig in HloTestBase and delete multiple duplicate definitions. (Dimitris Vardoulakis, 2018-09-05)
  PiperOrigin-RevId: 211662523
* libc++ fix: make comparison functors const. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211661670
* test_util.py: Allow use_gpu to change between calls to self.cached_session(). (Asim Shankar, 2018-09-05)
  use_gpu does not affect the creation of the session; it only affects the context manager in which nodes are added to the graph, so it should not be included in the consistency check.
  PiperOrigin-RevId: 211659833
* Introduce auxiliary input and allow "cross-linking" in the bidirectional LSTM Op. (A. Unique TensorFlower, 2018-09-05)
  This introduces a connection between forward and backward cells across subsequent layers when stacking bidirectional LSTM Ops on top of each other.

  In more detail: Previously, the Op had only one input that was fed into the layer in the following way:

      INPUT    (INPUT_REVERSED)
        |            |
      ---------------------------
      |  FW_LSTM      BW_LSTM   |   <----- bidi-LSTM cell (with one input / two outputs)
      ---------------------------
        |            |
      FW_OUT       BW_OUT

  Now, the Op can have an (optional) auxiliary input in the following way:

      AUX_INPUT    (AUX_INPUT_REVERSED)
          |               |
      INPUT   |   (INPUT_R'D.)|
        |     |       |       |
      ---------------------------
      |  \   /         \     /  |
      |  FW_LSTM      BW_LSTM   |   <----- bidi-LSTM cell (with 2 inputs / 2 outputs)
      ---------------------------
          |               |
        FW_OUT          BW_OUT

  When stacking these Ops, previously, only the following flow was allowed:

           Input
          /     \
      FW_LSTM1  BW_LSTM1
         |         |
      FW_LSTM2  BW_LSTM2
         |         |
      FW_LSTM3  BW_LSTM3
          \     /
          Output

  With the introduction of an auxiliary input to the bidi-LSTM layer, the forward (FW_LSTMi) output of the ith layer is fed as the input to the next layer (hence, as inputs to both FW_LSTM{i+1} and BW_LSTM{i+1}), and the backward output is fed as the auxiliary input to both FW_LSTM{i+1} and BW_LSTM{i+1}. This way, the stacking can be changed to allow for "cross-linking" between subsequent layers in the following way:

           Input
          /     \
      FW_LSTM1  BW_LSTM1
         | \    /  |
         | /    \  |
      FW_LSTM2  BW_LSTM2
         | \    /  |
         | /    \  |
      FW_LSTM3  BW_LSTM3
          \     /
          Output

  PiperOrigin-RevId: 211659472
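The cross-linked wiring described above can be sketched with plain Python functions standing in for the LSTM cells (purely illustrative; the TFLite Op operates on sequence tensors, not scalars, and these "cells" are toy arithmetic):

```python
def make_cell(weight):
    """A toy 'cell' that combines its main input with an optional aux input."""
    def cell(x, aux=0.0):
        return weight * (x + aux)
    return cell

def stacked_bidi(x, fw_cells, bw_cells):
    """Stack bidi layers with cross-linking between subsequent layers."""
    fw_out = fw_cells[0](x)
    bw_out = bw_cells[0](x)
    for fw, bw in zip(fw_cells[1:], bw_cells[1:]):
        # Cross-linking: the previous forward output is the main input to
        # BOTH next-layer cells, and the previous backward output is the
        # auxiliary input to both.
        fw_out, bw_out = fw(fw_out, aux=bw_out), bw(fw_out, aux=bw_out)
    return fw_out, bw_out

fw = [make_cell(1.0), make_cell(2.0)]
bw = [make_cell(1.0), make_cell(3.0)]
out = stacked_bidi(5.0, fw, bw)
# layer 1: fw = 5, bw = 5; layer 2: fw = 2*(5+5) = 20, bw = 3*(5+5) = 30
```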
* Add HloSchedule class representing a sequential order of an HloModule. (Mark Heffernan, 2018-09-05)
  Currently we represent a sequential schedule of a module using a SequentialHloOrdering::HloModuleSequence which is a type alias of a bare map from HloComputation* to std::vector<HloInstruction*>. This CL replaces this with a proper class which results in better encapsulation of code which deals with schedules and better enforcement of invariants.
  This CL also fixes a corner-case bug in dataflow analysis, where values of instructions which are live out of the computation erroneously did not interfere with the values of instructions scheduled after the root instruction.
  PiperOrigin-RevId: 211656888
* Optimize CuboidConvolutionBwdInput. (Eugene Zhulenev, 2018-09-05)
  ~25-30% speedup when compiled with AVX.
  * collapse inner dims before contraction
  * eval kernel tensor before contraction
  PiperOrigin-RevId: 211651030
* Set CUDA_VISIBLE_DEVICES='' in tfcompile and tfcompile tests' genrules. (Justin Lebar, 2018-09-05)
  This prevents these build-time rules from accessing any GPUs which might be present on the build machine and interfering with GPU tests which might be running concurrently.
  PiperOrigin-RevId: 211647681
* [XLA] Give "big" and "small" params different colors in hlo_graph_dumper. (Justin Lebar, 2018-09-05)
  PiperOrigin-RevId: 211643209
* Fix categorical feature handler to use a high-precision 64-bit accumulator. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211642436
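Why a 64-bit accumulator matters: repeatedly adding small values into a 32-bit float loses precision once the running sum grows large. A small demonstration in plain Python, simulating float32 rounding with a struct round-trip (this illustrates the numeric issue generally, not the handler's actual code):

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest IEEE 754 binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

n = 1_000_000
f32_sum = 0.0
f64_sum = 0.0
for _ in range(n):
    f32_sum = to_f32(f32_sum + 0.1)  # 32-bit accumulator: rounds every step
    f64_sum += 0.1                   # 64-bit accumulator

f32_error = abs(f32_sum - n * 0.1)
f64_error = abs(f64_sum - n * 0.1)
# The 32-bit accumulator drifts visibly; the 64-bit one stays tiny.
```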
* Alias tensorflow::gtl::InlinedVector to absl::InlinedVector. (Benjamin Kramer, 2018-09-05)
  PiperOrigin-RevId: 211639440
* Exclude icf=all from TFLite linker options on iOS. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211637019