path: root/tensorflow/core
Commit message (Author, Age)
* Add support for modeling fast memory close to the processor/gpu (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216453979
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216452496
* Add 'remove' operation to MutableHashTable and MutableDenseHashTable. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216443201
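A minimal usage sketch of the new 'remove' operation. The module path is an assumption (the table class has lived under tf.contrib.lookup and, in later releases, tf.lookup.experimental), and the dtypes and values are illustrative only:

```python
import tensorflow as tf

# Mutable hash table mapping string keys to int64 values, with a default
# value returned for missing keys.
table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)

table.insert(tf.constant(["a", "b"]), tf.constant([1, 2], dtype=tf.int64))
table.remove(tf.constant(["a"]))       # the newly added operation
table.lookup(tf.constant(["a", "b"]))  # "a" now falls back to the default, -1
```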
* [tf.data vectorization] Add vectorizer for `Add` op (Rachel Lim, 2018-10-09)
  PiperOrigin-RevId: 216424512
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216410913
* Add RaggedTensors to tf.core. Moving the RaggedGather op kernel. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216400726
* [tf.data] NUMA-aware MapAndBatch dataset. (Brennan Saeta, 2018-10-09)
  PiperOrigin-RevId: 216395709
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216392772
* [tf.data vectorization] Handle captured inputs in MapVectorization optimization (Rachel Lim, 2018-10-09)
  PiperOrigin-RevId: 216381943
* Create SDCAOptimizerV2 op to fix the "adaptative" typo. (Yuefeng Zhou, 2018-10-09)
  PiperOrigin-RevId: 216370193
* Change LOG(WARNING) to VLOG(1) in utils (Peter Ma, 2018-10-09)
  PiperOrigin-RevId: 216369081
* Removed unused load statements from the core BUILD. (A. Unique TensorFlower, 2018-10-09)
  PiperOrigin-RevId: 216354906
* Automated rollback of commit 5f308cb408eb46ec9af0546be6b9ae1d5166b185 (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216309111
* Refactor CalculateOutputSize() from VirtualScheduler protected member function to utils; refactor EstimateSize() from memory_optimizer.cc to utils; some small changes for readability improvement (Peter Ma, 2018-10-08)
  PiperOrigin-RevId: 216307257
* Automated rollback of commit 07df147ab20c4a5329148e5fb5f7f6b187cb73a4 (Reed Wanderman-Milne, 2018-10-08)
  PiperOrigin-RevId: 216299809
* Add a tracing::ScopedActivity event to track the duration of a Session::Run() call for better xprof tracing. (A. Unique TensorFlower, 2018-10-08)
  Also annotate synchronous op execution with the session-run id (or step_id) as metadata, leveraging the support introduced in cl/215985561. This should enable highlighting the duration of a Session::Run and all the ops that ran in it, for visualizing latency regressions in the case of CPU inference.
  PiperOrigin-RevId: 216284682
* Register int64 SUM GPU kernel. (James Qin, 2018-10-08)
  PiperOrigin-RevId: 216280913
* Fix the seeding for `Dataset.shuffle(..., reshuffle_each_iteration=False)`. (Derek Murray, 2018-10-08)
  Previously, we were passing the first (graph-level) seed for both the graph-level and op-level seeds when creating a C++ dataset. This change passes the op-level seed to the appropriate point, and adds a test for the behavior with graph-but-not-op-level seeds.
  PiperOrigin-RevId: 216280641
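For context, a minimal sketch of the graph-level vs. op-level seeding the commit message refers to. This is illustrative only (TF 1.x names; in TF 2.x the graph-level seed is set with tf.random.set_seed), and the dataset contents are hypothetical:

```python
import tensorflow as tf

# Graph-level seed: ops without an explicit seed derive theirs from it.
tf.set_random_seed(42)

# Op-level seed passed to shuffle(); with reshuffle_each_iteration=False the
# same permutation should be reused on every pass over the dataset.
dataset = tf.data.Dataset.range(10).shuffle(
    buffer_size=10, seed=7, reshuffle_each_iteration=False)
```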
* Consolidate device parameter arguments into a shared DeviceInfo struct (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216280197
* Remove deprecations for some of the endpoints in ApiDef files. (Anna R, 2018-10-08)
  These changes are made according to https://github.com/tensorflow/community/pull/16. I am keeping a few symbols deprecated that are not mentioned in the doc:
  - tf.diag: it seems best to keep it next to tf.linalg.diag, so that the two are easy to compare and decide which one to use. The plan is to rename tf.diag to tf.tensor_diag.
  - tf.is_nan: similar to tf.is_inf, tf.is_finite, and tf.is_numeric_tensor, which are all getting deprecated and replaced by symbols in tf.debugging.
  - tf.string_to_number: other string endpoints in the root namespace are getting deprecated, e.g. tf.substr and tf.string_join.
  - tf.dequantize: all quantization ops should be under tf.quantize. I probably missed this one.
  - tf.check_numerics: similar to other debugging ops that are getting moved to tf.debugging.
  - tf.squared_difference: moved to the tf.math namespace and not popular enough (compared to math ops such as tf.add) to justify keeping an endpoint in root.
  - tf.decode_raw: similar to other ops such as tf.decode_csv that are getting moved to tf.io.decode_csv.
  PiperOrigin-RevId: 216278010
* Automated rollback of commit 13b47e6c4f9d7b295948b1057139bf676e394b6f (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216260575
* Automated rollback of commit 09b0fc199129e0f487a39741bdf674cf09035cbc (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216256115
* Merge pull request #22783 from Intel-tensorflow:sfu2/clean_mklml (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216253115
* [tf.data] Choose non-deterministic seed once per Python-level `Dataset` object. (Derek Murray, 2018-10-08)
  This changes the behavior of randomness-introducing datasets (`tf.data.Dataset.shuffle()`, `tf.data.experimental.shuffle_and_repeat()`, and `tf.data.experimental.RandomDataset`). Previously, when you used the same `tf.data.Dataset` object multiple times in a pipeline (e.g. by zipping two datasets derived from the same randomness-introducing dataset) *and* you did not specify an explicit `seed`, the implementation would choose different non-deterministic seeds for each use of the `Dataset` object. With this change, the seed will be chosen once per `Dataset` (technically, once per `Dataset`-`Graph` combination, due to the vagaries of capturing state in `Dataset.make_one_shot_iterator()`), which means that all uses of the same dataset object will observe the same sequence of values.
  This change also revealed a small bug in how `Dataset.shuffle(..., reshuffle_each_iteration=False)` is serialized when an explicit seed is specified. The op-level seed was dropped, which could lead to non-deterministic behavior. This change fixes that issue by forwarding the op-level seed to the appropriate place.
  PiperOrigin-RevId: 216248013
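A small sketch of the reuse pattern described above (dataset contents are hypothetical). Per the commit message, after this change both uses of the same shuffled `Dataset` object observe the same sequence of values:

```python
import tensorflow as tf

# One shuffled dataset object, with no explicit seed.
shuffled = tf.data.Dataset.range(10).shuffle(buffer_size=10)

# The same Python-level object used twice in one pipeline. With the seed now
# chosen once per Dataset object, both components of the zipped pairs should
# come from the same shuffled order.
pairs = tf.data.Dataset.zip((shuffled, shuffled))
```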
* Automated rollback of commit 295b3c80555cc82d8d70faf96a47681e1d904b9c (Derek Murray, 2018-10-08)
  PiperOrigin-RevId: 216247929
* Partial support for tfe.defun in tf.gradients. (Alexandre Passos, 2018-10-08)
  Doesn't attempt to deal with cases where we might have already generated the FunctionDef for the parent function, as in that case we cannot easily modify the forward pass.
  PiperOrigin-RevId: 216243224
* Add timeout mechanism to Grappler meta optimizer. (A. Unique TensorFlower, 2018-10-08)
  This is only a best-effort mechanism, since the meta optimizer only checks if it has been cancelled before running each sub-optimizer. We can add cancellation to each sub-optimizer if necessary.
  PiperOrigin-RevId: 216234262
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216217887
* Merge pull request #21658 from lowintelligence:master (TensorFlower Gardener, 2018-10-08)
  PiperOrigin-RevId: 216217509
* [tf.data] Adding specialization for `MapDataset`, `ParallelMapDataset`, and `MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: y, x`). (Jiri Simsa, 2018-10-08)
  This specialization can produce the result without having to schedule the function using the executor.
  PiperOrigin-RevId: 216206232
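An illustration of the kind of map function this specialization targets: the function only forwards or reorders its inputs, so no user computation needs to be scheduled on the executor. The datasets themselves are hypothetical:

```python
import tensorflow as tf

left = tf.data.Dataset.range(5)
right = tf.data.Dataset.range(5, 10)
pairs = tf.data.Dataset.zip((left, right))

# Each output comes directly from an input argument (a "short-circuit" map).
swapped = pairs.map(lambda x, y: (y, x))
```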
* Fix compilation in unique_op when Eigen::Index != int64. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216205396
* Enable PinToHostOptimizer. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216201732
* Optimize PinToHostOptimizer by adding a cache; also add PinToHostOptimizer to benchmarks. (A. Unique TensorFlower, 2018-10-08)
  Original runtime: 4.83492736816 secs; with cache: 2.19033999443 secs.
  PiperOrigin-RevId: 216195286
* Make ExecutorState preserve the thread context. (A. Unique TensorFlower, 2018-10-08)
  PiperOrigin-RevId: 216187878
* Update ops-related pbtxt files. (A. Unique TensorFlower, 2018-10-05)
  PiperOrigin-RevId: 216000752
* Merge pull request #22386 from girving:stateless (TensorFlower Gardener, 2018-10-05)
  PiperOrigin-RevId: 215995215
* [tf.data vectorization] Feed inputs to vectorizers with notion of stackedness (Rachel Lim, 2018-10-05)
  PiperOrigin-RevId: 215989259
* Expand stateless random generators to match their stateful cousins (Geoffrey Irving, 2018-10-05)
  stateless_random_uniform now takes minval+maxval and handles ints, and stateless_normal/stateless_truncated_normal take mean+stddev. Additionally, all of the stateless functions now have proper doc strings. This is step one of moving stateless random numbers out of contrib.
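A minimal sketch of the integer path described above. The module path is an assumption (these generators lived in tf.contrib.stateless at the time of this commit and later surfaced as tf.random.stateless_uniform); the key property is that the same shape and seed pair always produce the same values:

```python
import tensorflow as tf

# Integer stateless sampling with explicit minval/maxval and a [2]-element seed.
ints = tf.random.stateless_uniform(
    shape=[4], seed=[1, 2], minval=0, maxval=10, dtype=tf.int32)

# Calling again with the same seed yields identical values.
same_ints = tf.random.stateless_uniform(
    shape=[4], seed=[1, 2], minval=0, maxval=10, dtype=tf.int32)
```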
* Automated rollback of commit ae0bc6f006497cc04a2ee75166d4ec71c7154fd8 (Jiri Simsa, 2018-10-05)
  PiperOrigin-RevId: 215969360
* [tf.data] Adding specialization for `MapDataset`, `ParallelMapDataset`, and `MapAndBatchDataset` whose user-provided functions have the property that each output argument takes its value directly from an input argument (e.g. `lambda x, y: y, x`). (Jiri Simsa, 2018-10-05)
  This specialization can produce the result without having to schedule the function using the executor.
  PiperOrigin-RevId: 215957592
* Clean up the code under INTEL_MKL_ML_ONLY (shengfuintel, 2018-10-05)
* Copy device from If op to the lowered ops. (Saurabh Saxena, 2018-10-05)
  Enable GPU tests for cond_v2.
  PiperOrigin-RevId: 215956220
* Merge pull request #20476 from yongtang:06052018-bincount-shape (TensorFlower Gardener, 2018-10-05)
  PiperOrigin-RevId: 215947463
* Revert constant folding to previous state. (Tong Shen, 2018-10-05)
  PiperOrigin-RevId: 215946205
* Declare that stateless random ops are not differentiable in C++ code. (Tong Shen, 2018-10-05)
  PiperOrigin-RevId: 215935319
* When running a native/builtin op via the eager C API, automatically fill in default attr values that are not overridden (e.g. transpose_a in the matmul op). (Mingsheng Hong, 2018-10-05)
  This is required for backward compatibility (a binary built via an older version of TF should still run on a newer version of TF, where some ops may have added attrs). For non-eager graph building, the default attr values of graph ops are added by tensorflow::AddDefaultsToNodeDef(). We ran into this issue when running the same S4TF test cases via eager APIs: some tests failed due to "missing attrs", but they are fixed by this patch.
  PiperOrigin-RevId: 215927271
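A Python-level illustration of the attr in question; the commit itself is about the eager C API layer, so this only shows the user-visible default being relied on versus overridden:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Relies on the op's default attrs (transpose_a=False, transpose_b=False),
# which eager execution now fills in automatically when not overridden.
c = tf.matmul(a, b)

# Explicitly overrides one attr.
d = tf.matmul(a, b, transpose_a=True)
```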
* Pin ops with small integer inputs (already on the CPU) to the CPU in eager. (Akshay Modi, 2018-10-04)
  An environment variable (TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING) is provided to turn this off if necessary (it is on by default).
  PiperOrigin-RevId: 215821915
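A hedged sketch of how the opt-out might be used. The commit message only states that the variable exists and is on by default; the accepted value for disabling it ("0" below) is an assumption:

```python
import os

# Assumed opt-out value; set before importing TensorFlow so it takes effect.
os.environ["TF_EAGER_ENABLE_SMALL_TENSOR_CPU_PINNING"] = "0"

import tensorflow as tf  # imported after setting the variable on purpose
```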
* Fix bug in Grappler constant folding: the logic detecting full reductions was flawed. Added better test coverage. (A. Unique TensorFlower, 2018-10-04)
  Also added an extra test for a related symbolic shape inference operation that I first suspected to be broken.
  PiperOrigin-RevId: 215812753
* Add apidefs for the list ops. (Dan Moldovan, 2018-10-04)
  PiperOrigin-RevId: 215802845
* Error out when PartitionedCall is created with the wrong number of arguments. (Alexandre Passos, 2018-10-04)
  (used to be a segfault)
  PiperOrigin-RevId: 215791737