* Merge pull request #22046 from yongtang:08202018-conv1d-doc (TensorFlower Gardener, 2018-09-05)
  PiperOrigin-RevId: 211705018
* Support converting eager tensor to tf.float16 if a numpy half is passed. (Akshay Modi, 2018-09-05)
  This still defaults to float32 for all normal floats.
  PiperOrigin-RevId: 211704918
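A plain-Python sketch of the dtype rule this entry describes: a numpy half yields float16, while normal Python floats still default to float32. The helper name `infer_float_dtype` is hypothetical; the real dispatch lives inside TensorFlow's eager tensor conversion.

```python
import numpy as np

def infer_float_dtype(value):
    # A numpy half is converted to float16...
    if isinstance(value, np.float16):
        return "float16"
    # ...while plain Python floats keep the float32 default.
    return "float32"
```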
* Skip quantization of optional tensors (tensor_idx = -1) (Suharsh Sivakumar, 2018-09-05)
  PiperOrigin-RevId: 211703281
* Special-case the AccumulateNV2 op in print_selective_registration_header (A. Unique TensorFlower, 2018-09-05)
  AccumulateNV2 doesn't have or need a kernel. It gets rewritten to other ops by accumulate_n_optimizer.cc. This change allows it to be mentioned in the output of print_selective_registration_header, rather than being ignored with a warning. Behavior for other ops is preserved.
  PiperOrigin-RevId: 211701878
* Deprecate `tf.train.batch()` and related APIs. (Derek Murray, 2018-09-05)
  These APIs are based on queue runners, which have been deprecated and will be removed in TensorFlow 2.0. They have been replaced with `tf.data.Dataset`, which provides a more efficient version of the same functionality.
  PiperOrigin-RevId: 211700442
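To illustrate the batching semantics that `tf.data.Dataset.batch` provides as the replacement, here is a plain-Python sketch; the `drop_remainder` flag mirrors the tf.data parameter of the same name, but this is an illustration, not the real implementation.

```python
def batch(iterable, batch_size, drop_remainder=False):
    buf = []
    for element in iterable:
        buf.append(element)
        if len(buf) == batch_size:
            # Emit a full batch as soon as it is assembled.
            yield buf
            buf = []
    # A final partial batch is emitted unless drop_remainder is set.
    if buf and not drop_remainder:
        yield buf
```

For example, batching `range(5)` by 2 yields `[0, 1]`, `[2, 3]`, and the partial batch `[4]`.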
* Merge pull request #21716 from artsobolev:pfor-softplus-fix (TensorFlower Gardener, 2018-09-05)
  PiperOrigin-RevId: 211700362
* Correct gradient for multi-output tfe.py_func (Alexandre Passos, 2018-09-05)
  PiperOrigin-RevId: 211698400
* [XLA] Rename PrecisionConfigProto to PrecisionConfig (David Majnemer, 2018-09-05)
  The "Proto" suffix adds little clarity but makes a long type name even longer.
  PiperOrigin-RevId: 211693871
* [XLA] Make tensorflow/compiler use absl::{StrCat,string_view,InlinedVector} consistently (Benjamin Kramer, 2018-09-05)
  StringPiece is an alias for absl::string_view, InlinedVector is aliased to absl::InlinedVector. StrCat is compatible, so swapping it out is safe.
  PiperOrigin-RevId: 211691840
* Change tags for estimator_test (Austin Anderson, 2018-09-05)
  PiperOrigin-RevId: 211688974
* [tf.data]: Fix internal comment. (Brennan Saeta, 2018-09-05)
  PiperOrigin-RevId: 211687433
* Automated rollback of commit 7fa693209fe238478739b3982f652a7e35be91f3 (Mark Heffernan, 2018-09-05)
  PiperOrigin-RevId: 211681957
* Make TFLite NNAPI delegate friendlier to application code. (A. Unique TensorFlower, 2018-09-05)
  Especially allows running the benchmark on O-MR1 without an exit() of the process. Also fixes a bug in the interpretation of error values (NNAPI vs. TFLite error codes).
  PiperOrigin-RevId: 211681942
* Make logging less verbose (Sanjoy Das, 2018-09-05)
  I want --vmodule=xla_compilation_cache=1 to print only the most essential things.
  PiperOrigin-RevId: 211676846
* Internal Change. (Michael Case, 2018-09-05)
  PiperOrigin-RevId: 211666438
* Internal change. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211665268
* Allow gradients() calls from inside a tfe.defun wrt captured tensors. (Skye Wanderman-Milne, 2018-09-05)
  This modifies https://github.com/tensorflow/tensorflow/commit/834da2c3fddab1bbbce742db572cfe65dd320fcd to work with tfe.defun in addition to the legacy Defun implementation.
  PiperOrigin-RevId: 211663702
* [TF:XLA] Define DefaultPrecisionConfig in HloTestBase and delete multiple duplicate definitions. (Dimitris Vardoulakis, 2018-09-05)
  PiperOrigin-RevId: 211662523
* libc++ fix: make comparison functors const (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211661670
* test_util.py: Allow use_gpu to change between calls to self.cached_session() (Asim Shankar, 2018-09-05)
  use_gpu does not affect the creation of the session; it only affects the context manager in which nodes are added to the graph, so it should not be included in the consistency check.
  PiperOrigin-RevId: 211659833
* Introduce auxiliary input and allow "cross-linking" in the bidirectional LSTM Op. (A. Unique TensorFlower, 2018-09-05)
  This introduces a connection between forward and backward cells across subsequent layers when stacking bidirectional LSTM Ops on top of each other.

  In more detail: previously, the Op had only one input that was fed into the layer in the following way:

      INPUT   (INPUT_REVERSED)
        |          |
      -----------------------
      | FW_LSTM     BW_LSTM |   <----- bidi-LSTM cell (with one input / two outputs)
      -----------------------
        |          |
      FW_OUT     BW_OUT

  Now, the Op can have an (optional) auxiliary input in the following way:

          AUX_INPUT   (AUX_INPUT_REVERSED)
             |              |
      INPUT  |   (INPUT_R'D.)|
        |    |        |      |
      -------------------------
      |   \ /          \ /   |
      | FW_LSTM      BW_LSTM |   <----- bidi-LSTM cell (with 2 inputs / 2 outputs)
      -------------------------
             |              |
          FW_OUT         BW_OUT

  When stacking these Ops, previously, only the following flow was allowed:

          Input
         /      \
      FW_LSTM1  BW_LSTM1
        |          |
      FW_LSTM2  BW_LSTM2
        |          |
      FW_LSTM3  BW_LSTM3
         \      /
          Output

  With the introduction of an auxiliary input to the bidi-LSTM layer, the forward output of the ith layer (FW_LSTMi) is fed as the input to the next layer (hence, as input to both FW_LSTM{i+1} and BW_LSTM{i+1}), and the backward output is fed as the auxiliary input to both FW_LSTM{i+1} and BW_LSTM{i+1}. This way, the stacking can be changed to allow for the "cross-linking" between subsequent layers in the following way:

          Input
         /      \
      FW_LSTM1  BW_LSTM1
        | \    /   |
        | /    \   |
      FW_LSTM2  BW_LSTM2
        | \    /   |
        | /    \   |
      FW_LSTM3  BW_LSTM3
         \      /
          Output

  PiperOrigin-RevId: 211659472
* Add HloSchedule class representing a sequential order of an HloModule. (Mark Heffernan, 2018-09-05)
  Currently we represent a sequential schedule of a module using a SequentialHloOrdering::HloModuleSequence, which is a type alias of a bare map from HloComputation* to std::vector<HloInstruction*>. This CL replaces this with a proper class, which results in better encapsulation of code which deals with schedules and better enforcement of invariants.
  This CL also fixes a corner-case bug in dataflow analysis, where values of instructions which are live out of the computation erroneously did not interfere with the values of instructions scheduled after the root instruction.
  PiperOrigin-RevId: 211656888
* Optimize CuboidConvolutionBwdInput. (Eugene Zhulenev, 2018-09-05)
  ~25-30% speedup when compiled with AVX.
  - collapse inner dims before contraction
  - eval kernel tensor before contraction
  PiperOrigin-RevId: 211651030
* Set CUDA_VISIBLE_DEVICES='' in tfcompile and tfcompile tests' genrules. (Justin Lebar, 2018-09-05)
  This prevents these build-time rules from accessing any GPUs which might be present on the build machine and interfering with GPU tests which might be running concurrently.
  PiperOrigin-RevId: 211647681
* [XLA] Give "big" and "small" params different colors in hlo_graph_dumper. (Justin Lebar, 2018-09-05)
  PiperOrigin-RevId: 211643209
* Fix categorical feature handler accumulator to use high precision 64 bit accumulator. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211642436
* Alias tensorflow::gtl::InlinedVector to absl::InlinedVector (Benjamin Kramer, 2018-09-05)
  PiperOrigin-RevId: 211639440
* Exclude icf=all from TFLite linker options on iOS. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211637019
* Convert more kernel signatures to use runtime shapes. (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211633744
* utils cleanup: move the builtins module under operators. (Dan Moldovan, 2018-09-05)
  PiperOrigin-RevId: 211631516
* Simplify analysis in functionalize_cond by splitting CondState. (Jacques Pienaar, 2018-09-05)
  - Split CondState into CondState (which corresponds to scope previously) and AncestorState (which tracks which switch/merge nodes are an ancestor of a node). Previously CondState tracked both, but that resulted in difficult-to-follow meet rules. Instead, by splitting these out, the meets for merge and non-merge are straightforward set operations. The ancestor relation is similarly easy to compute along with CondState computation.
  - Enhance the redundant switch checking: previously we only considered the predicates, but %s=switch(val=%P, pred=switch(%P_1, %P):then) is also redundant, as if %P is true then %s:else is dead.
  - Enhance in-edge testing to insert a switch if a value from an outer context is consumed inside an inner context.
  - Rename CondStateMap to StateMap to match new usage.
  PiperOrigin-RevId: 211622021
* Minimum change for generating Eager ops with Toco. (Yu-Cheng Ling, 2018-09-05)
  PiperOrigin-RevId: 211621189
* compat: Update forward compatibility horizon to 2018-09-05 (A. Unique TensorFlower, 2018-09-05)
  PiperOrigin-RevId: 211598349
* Update `make_tensor_proto` docs to reference public symbol for `make_ndarray`. (Tom Hennigan, 2018-09-05)
  PiperOrigin-RevId: 211592901
* Add support for grouped convolutions to the HloEvaluator. (Adrian Kuegel, 2018-09-05)
  Add a missing check to InferConvolveShape(): the output feature dimension needs to be divisible by feature_group_count. Also fix some tests which took a const reference to the return value of a function which doesn't return a reference.
  PiperOrigin-RevId: 211592011
* [XLA] Add some ReduceWindow tests, and make them more robust. (Michael Kuperstein, 2018-09-05)
  PiperOrigin-RevId: 211588937
* PR #21187: Added a normalization term to ctc_beam_search_decoder for tflite (A. Unique TensorFlower, 2018-09-04)
  PiperOrigin-RevId: 211586062
* Merge pull request #22074 from lc0:eager_save (TensorFlower Gardener, 2018-09-04)
  PiperOrigin-RevId: 211584024
* Automated rollback of commit 8cf8afefdb4c240f74a05e24246c8cd2dcce9d54 (Michael Case, 2018-09-04)
  PiperOrigin-RevId: 211581486
* Merge pull request #21951 from minggli:doc/session (TensorFlower Gardener, 2018-09-04)
  PiperOrigin-RevId: 211581348
* Allow configuring session options in keras when running with distribution strategy. (Priya Gupta, 2018-09-04)
  PiperOrigin-RevId: 211576839
* In TPUStrategy.configure, copy cluster spec from cluster resolver so that the user doesn't have to pass it again to session_config. (Priya Gupta, 2018-09-04)
  PiperOrigin-RevId: 211576564
* Make minimum num elements of quantizable weights tensor configurable. (Suharsh Sivakumar, 2018-09-04)
  Also a minor fix to enable quantization of shared weights if hybrid evaluation is true.
  PiperOrigin-RevId: 211573947
* Set session_config.isolate_session_state to True for all strategies except Parameter server strategy, where variables are shared across sessions. (Priya Gupta, 2018-09-04)
  PiperOrigin-RevId: 211573447
* Relu1 custom op. (Alan Chiao, 2018-09-04)
  This is implemented as a custom op instead of a builtin op because Relu1 is not supported in TensorFlow and not commonly used.
  PiperOrigin-RevId: 211571619
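A numpy sketch of the activation this entry refers to, assuming Relu1 clamps values to the [-1, 1] range in the way TFLite's RELU_N1_TO_1 fused activation does; the actual kernel lives in the TFLite custom-op registry and this one-liner is only illustrative.

```python
import numpy as np

def relu1(x):
    # Clamp each element to [-1, 1] (assumed Relu1 semantics,
    # matching TFLite's RELU_N1_TO_1 activation).
    return np.clip(x, -1.0, 1.0)
```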
* Clone the model in fit instead of compile for distribution strategy in keras. (Priya Gupta, 2018-09-04)
  PiperOrigin-RevId: 211570665
* Move iterator.get_next() to be called inside fit from inside of standardize function. (Sourabh Bajaj, 2018-09-04)
  PiperOrigin-RevId: 211564198
* Test cleanups (Asim Shankar, 2018-09-04)
  - Remove unnecessary use of test_session() in tests that run with eager execution enabled.
  - Use cached_session() instead of test_session(). (self.test_session() has been deprecated in 9962eb5e84b15e309410071b06c2ed2d6148ed44 as its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about:
    * the fact that the session may be reused.
    * the session is not closed even when doing a "with self.test_session()" statement.)
  PiperOrigin-RevId: 211562969
* Hardcode input range from output for relu (A. Unique TensorFlower, 2018-09-04)
  PiperOrigin-RevId: 211562900
* [XLA] Don't show trivial feature_group_count attributes (David Majnemer, 2018-09-04)
  If the feature_group_count is 1, don't bother showing it, as it is not very informative and is a very common scenario. This is consistent with HloCustomCall's feature_group_count attribute.
  PiperOrigin-RevId: 211560372