* Fix links in the TensorFlow Security Advisories (Frank Chen, 2018-05-31)
  PiperOrigin-RevId: 198762795
* Making the tf.name_scope blocks related to the factor and weight vars configurable. (A. Unique TensorFlower, 2018-05-31)
  By default they will not be scoped.
  PiperOrigin-RevId: 198759754
* RuntimeShapes class: minor tweak to fix builds. (A. Unique TensorFlower, 2018-05-31)
  PiperOrigin-RevId: 198755870
* [XLA] Make HloInstruction::backend_config() a JSON-encoded protobuf. (Justin Lebar, 2018-05-31)
  PiperOrigin-RevId: 198754463
* [XLA] Redesign: delete computation_tracker and user_computation. (A. Unique TensorFlower, 2018-05-31)
  PiperOrigin-RevId: 198743117
* Initial implementation of a few of the list-specific operators. (Dan Moldovan, 2018-05-31)
  This introduces an abstraction for a dispatch context, which allows passing local information through to the specialized operators.
  PiperOrigin-RevId: 198742074
* Suppress generation of the proto API's descriptor() method; it conflicts with the field name. (A. Unique TensorFlower, 2018-05-31)
  PiperOrigin-RevId: 198739727
* Introduce runtime shape class. (A. Unique TensorFlower, 2018-05-31)
  PiperOrigin-RevId: 198739017
* Another handle_data fix for graph-mode functions. (Skye Wanderman-Milne, 2018-05-31)
  PiperOrigin-RevId: 198734229
* Cleanup: update continue_statements.py to use the base transformer facilities for tracking local state and reindenting node blocks. (Dan Moldovan, 2018-05-31)
  Rearrange the error handling in the base transformer to avoid chained exceptions.
  PiperOrigin-RevId: 198727946
* Standardize shifts in multiplication util functions. (A. Unique TensorFlower, 2018-05-31)
  PiperOrigin-RevId: 198725578
* Make GraphConstructor create nodes in the same order as the GraphDef. (Skye Wanderman-Milne, 2018-05-31)
  While technically the order of the created nodes doesn't matter, this makes viewing and debugging graphs more sensible. Fixes #19594.
  PiperOrigin-RevId: 198721173
* [tf.data] Scaling down the `batch_dataset_op_test`. (Jiri Simsa, 2018-05-31)
  PiperOrigin-RevId: 198715407
* Implementation of sparse_to_dense. (A. Unique TensorFlower, 2018-05-31)
  PiperOrigin-RevId: 198710452
* [XLA] Redesign: delete the old service interface. (A. Unique TensorFlower, 2018-05-30)
  Deleted methods:
  - Computation
  - ComputeConstant
  - Execute
  - ExecuteAsync
  - ExecuteParallel
  - GetComputationStats
  - GetComputationShape
  - GetLocalShape
  - IsConstant
  - LoadComputationSnapshot
  - Op
  - SetReturnValue
  - SnapshotComputation
  PiperOrigin-RevId: 198669035
* Improve ReshapeIsIdentity to work with symbolic shapes. (Jingyue Wu, 2018-05-30)
  For example, with this CL, ArithmeticOptimizer can optimize the Reshape below into a no-op:
      s = Shape(t)
      Reshape(t, Concat(s[0], s[1], s[2], s[3]))
  PiperOrigin-RevId: 198668726
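  A minimal TF 1.x sketch of the pattern this targets (tensor names are illustrative; the rewrite itself runs inside Grappler, not in user code):
      import tensorflow as tf

      t = tf.placeholder(tf.float32, shape=[None, 4, 4, 3])
      s = tf.shape(t)
      # Reshaping t to its own (partially symbolic) shape; with this CL,
      # ArithmeticOptimizer can recognize the Reshape as an identity and
      # remove it.
      r = tf.reshape(t, tf.stack([s[0], s[1], s[2], s[3]]))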
* Add an explicit GCS_READ_CACHE_DISABLED env var to GcsFileSystem. (Nick Felt, 2018-05-30)
  PiperOrigin-RevId: 198658074
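  A hedged usage sketch; the accepted values are not shown in this entry, so the "1" below is an assumption:
      import os
      # Assumption: setting the variable before the GCS filesystem is first
      # used disables the read cache.
      os.environ["GCS_READ_CACHE_DISABLED"] = "1"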
* Add a subclassed Model's attribute-assigned variables to Model.weights et al. (Allen Lavoie, 2018-05-30)
  Makes the Variable.trainable property public, which is sensible if we're discouraging use of the global collection (currently eager execution is using ResourceVariable._trainable in a bunch of places anyway). I'm leaving it read-only for now, since we should toggle in and out of the global collection when it changes.
  Same change for checkpointable data structures with respect to gathering extra variables. They'll behave like subclassed Models.
  I think this makes more sense than trying to have a distinction between "variables" and "weights". It's also more sensible than collecting everything that would get checkpointed, since that will include Optimizer slot variables and metrics. Collecting those is generally pointless, and accidentally adding them to gradient tapes would be horribly confusing.
  PiperOrigin-RevId: 198656079
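  A hedged Keras-subclassing sketch of the behavior described (class and attribute names are illustrative):
      import tensorflow as tf

      class Scaler(tf.keras.Model):
        def __init__(self):
          super(Scaler, self).__init__()
          # An attribute-assigned variable, not created through a Layer.
          self.scale = tf.Variable(2.0, trainable=True)

        def call(self, inputs):
          return inputs * self.scale

      m = Scaler()
      # With this change, the attribute-assigned variable is gathered into
      # m.weights / m.trainable_weights, and Variable.trainable is public.
      assert any(v is m.scale for v in m.weights)
      assert m.scale.trainable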
* Automated g4 rollback of changelist 195379693. (HyoukJoong Lee, 2018-05-30)
  PiperOrigin-RevId: 198654780
* Expose xla_disable_hlo_passes via ExecutableBuildOptions. (Sanjoy Das, 2018-05-30)
  PiperOrigin-RevId: 198654099
* Remove code returning a bad status when the input pointer is nullptr in internal functions. (Ruoxin Sang, 2018-05-30)
  That would be a programming error, and since we have full control of the internal functions, it is OK to crash when it happens.
  PiperOrigin-RevId: 198651749
* [XLA] Add parsers for Window and ConvolutionDimensionNumbers. (Justin Lebar, 2018-05-30)
  Also modify relevant ToString functions so we can have the property Parse(ToString(x)) == x.
  PiperOrigin-RevId: 198650340
* Enable TOCO pip command line binding. (Nupur Garg, 2018-05-30)
  PiperOrigin-RevId: 198649827
* Fix bug with renorm + virtual_batch_size. (Chris Ying, 2018-05-30)
  PiperOrigin-RevId: 198648273
* [XLA] Switch replay_computation to use LocalClient. (Justin Lebar, 2018-05-30)
  This lets replay_computation build an executable once and run it multiple times. This is particularly important because in XLA:GPU, the first run of an executable does some autotuning and therefore is unrepresentative.
  This change removes --xla_hlo_profile_last_run, because I don't see how to support it in LocalClient -- LocalClient wants the do-profile bit to be set when we *compile*. (There may not be an easy fix for this; it worked with regular Client because we were recompiling every time we ran.)
  PiperOrigin-RevId: 198643577
* [TF:XLA] Bump open source llvm revision to r333547. (Sanjoy Das, 2018-05-30)
  PiperOrigin-RevId: 198642698
* Regard a path as a directory if it ends with "/" in GCS. (Ruoxin Sang, 2018-05-30)
  This implies the assumption that if a real GCS object has a file name ending with "/", it is always a directory marker rather than an object carrying actual contents.
  PiperOrigin-RevId: 198640604
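  A sketch of the rule, with a hypothetical helper name:
      def is_directory_marker(object_name):
        # Hypothetical helper: under the assumption above, any GCS object
        # whose name ends with "/" is a directory marker, never a file
        # with actual contents.
        return object_name.endswith("/")

      assert is_directory_marker("gs://bucket/logs/")
      assert not is_directory_marker("gs://bucket/logs/run1.txt")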
* Always delete old while loop after LICM. (Sanjoy Das, 2018-05-30)
  Right now the old while loop can stick around if it had side effects, which is incorrect.
  PiperOrigin-RevId: 198639203
* Add SerialDeviceBatchScheduler, which offers similar performance to the AdaptiveSharedBatchScheduler but increased reliability and stability. (A. Unique TensorFlower, 2018-05-30)
  ASBS assumes request latency can be minimized at a specific number of batch processing threads. Under reasonable load, this is true and ASBS performs well, but under low load latency is basically unaffected by the number of threads, and ASBS can learn a wide variety of 'optimal' values. If load resumes suddenly, these values can give very poor latencies. In most cases, ASBS will recover, eventually rediscovering the correct value, but we have observed other cases where the latency is so large and noisy that ASBS can't get a good signal to guide its learning and the number of threads remains stuck at the bad value.
  In addition, the incremental learning nature of this algorithm means that ASBS is always exploring to some extent, which can give rise to periods of non-optimal latency. This is most significant at high utilization, where the wrong number of threads can potentially overload the system.
  ASBS uses latency as a proxy for keeping the tensorflow processing pipeline optimally loaded. SDBS, on the other hand, uses a direct measurement of the pipeline fullness, and adjusts its number of batch processing threads accordingly. This solves the exploration problem. SDBS solves the low load problem by not adjusting its thread count when the threads pass some idleness threshold.
  PiperOrigin-RevId: 198638918
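  A schematic Python sketch of the control idea described above (the real scheduler is C++; the function name, thresholds, and update rule here are all illustrative):
      def next_thread_count(num_threads, in_flight_batches, idle_fraction,
                            target_in_flight=2, idle_threshold=0.5):
        # Low load: hold steady instead of "learning" a bad value.
        if idle_fraction >= idle_threshold:
          return num_threads
        # Otherwise steer the thread count from a direct fullness measurement.
        if in_flight_batches > target_in_flight:
          return num_threads + 1
        if in_flight_batches < target_in_flight and num_threads > 1:
          return num_threads - 1
        return num_threads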
* Add a convenience function, build_supervised_input_receiver_fn_from_input_fn, that takes an Estimator input_fn and returns an input receiver function. (Karmel Allison, 2018-05-30)
  PiperOrigin-RevId: 198638593
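  A hedged usage sketch; the function name comes from this entry, but the import path below and the input_fn are assumptions:
      import tensorflow as tf

      def input_fn():
        features = {"x": tf.constant([[1.0], [2.0]])}
        labels = tf.constant([[0.0], [1.0]])
        return features, labels

      # Assumed exposure point for illustration only.
      receiver_fn = (tf.contrib.estimator
                     .build_supervised_input_receiver_fn_from_input_fn(input_fn))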
* Automated g4 rollback of changelist 198444757. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198637528
* Makes empty() support uint8 on CPU. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198634986
* Remove environment variable to disable C API. (Skye Wanderman-Milne, 2018-05-30)
  This is staging for removing the _USE_C_API toggle altogether.
  PiperOrigin-RevId: 198634886
* Makes most variable writes depend on the cached value. (Alexandre Passos, 2018-05-30)
  This disallows some undefined behavior with unordered reads and writes.
  PiperOrigin-RevId: 198633444
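  A sketch of the kind of race this addresses (illustrative; the ordering is enforced internally via the cached value, not by user code):
      import tensorflow as tf

      v = tf.get_variable("v", initializer=0.0, use_resource=True)
      r = v.read_value()  # a read of v
      w = v.assign(1.0)   # a write to v
      # Previously, evaluating r and w in the same Session.run() call left
      # their relative order undefined; making writes depend on the cached
      # value serializes such read/write pairs.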
* Add HloProto support to replay_computation. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198631733
* Avoid recursion in ExpandDomain(), since deep recursion can overflow the stack. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198629366
* Add kwargs support for tpu.outside_compilation. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198625799
* Move RemoveInvolution optimization to optimizer stage. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198624394
* Add GCS configure ops. (Brennan Saeta, 2018-05-30)
  PiperOrigin-RevId: 198624285
* Add `fill_triangular_inverse`, which flattens a triangular matrix in a way such that: (Joshua V. Dillon, 2018-05-30)
      # Lower triangular matrix
      x = tf.matrix_band_part(x, -1, 0)
      x == fill_triangular(fill_triangular_inverse(x))
  Code by srvasude@ which I'm submitting on his behalf.
  PiperOrigin-RevId: 198623887
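  A round-trip sketch, assuming the new function lives alongside the existing fill_triangular in tf.contrib.distributions:
      import tensorflow as tf
      tfd = tf.contrib.distributions

      x = tf.constant([[1., 0., 0.],
                       [2., 3., 0.],
                       [4., 5., 6.]])
      x = tf.matrix_band_part(x, -1, 0)        # keep the lower triangle
      vec = tfd.fill_triangular_inverse(x)     # flattened, shape [6]
      x_roundtrip = tfd.fill_triangular(vec)   # equals x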
* Add control dependencies to the correct graph when simplifying packing ops. (Benoit Steiner, 2018-05-30)
  PiperOrigin-RevId: 198622727
* Add `tf.contrib.distributions.bijectors.MatrixInverseTriL`: Bijector that inverts a lower-triangular matrix. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198622553
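  A usage sketch (the bijector path comes from this entry; the matrix values are illustrative):
      import tensorflow as tf
      tfb = tf.contrib.distributions.bijectors

      L = tf.constant([[2., 0.],
                       [1., 4.]])
      bij = tfb.MatrixInverseTriL()
      L_inv = bij.forward(L)       # the (lower-triangular) inverse of L
      L_back = bij.inverse(L_inv)  # inversion is an involution: recovers L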
* Add include file which provides the proper std::string mapping. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198620715
* Skip errors in the function optimizer if the optimized graph was not modified before the error happened. (A. Unique TensorFlower, 2018-05-30)
  Currently an error can happen if a function can't be instantiated as a GrapplerFunctionItem.
  PiperOrigin-RevId: 198595096
* [tf.data] Change the batch dataset op test size to large to prevent timeouts. (Jiri Simsa, 2018-05-30)
  PiperOrigin-RevId: 198592202
* Let the SWIG-wrapped builder return the HloModuleProto. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198588920
* Add an option to propagate Status in GraphOptimizerStagePipelines. (Rob Sloan, 2018-05-30)
  PiperOrigin-RevId: 198585886
* Internal change. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198582954
* Disable flaky fused_rnn_cell_test. (Gunhan Gulsoy, 2018-05-30)
  PiperOrigin-RevId: 198582181
* KL divergence for two Dirichlet distributions. (A. Unique TensorFlower, 2018-05-30)
  PiperOrigin-RevId: 198573236
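  For reference, the standard closed form and a hedged usage sketch (assuming the divergence is registered for the Dirichlet distribution class; with a0 = sum(a_i) and b0 = sum(b_i)):
      # KL(Dir(a) || Dir(b)) = lgamma(a0) - lgamma(b0)
      #                      - sum(lgamma(a_i) - lgamma(b_i))
      #                      + sum((a_i - b_i) * (digamma(a_i) - digamma(a0)))
      import tensorflow as tf
      tfd = tf.distributions

      p = tfd.Dirichlet(concentration=[1., 2., 3.])
      q = tfd.Dirichlet(concentration=[2., 2., 2.])
      kl = tfd.kl_divergence(p, q)  # analytic, no sampling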