author    2017-08-01 16:16:52 -0700
committer 2017-08-01 16:21:29 -0700
commit    05c491d30888088873fedfbe81bca378c8c3fc87 (patch)
tree      d8779f7ceabeb6998c61c36fa9af1fe537c0a6e4 /tensorflow/examples/android
parent    e0108157af11bac4afd0ad1e7f2b07cd2fff2a7d (diff)
Merge changes from github.
END_PUBLIC
---
Commit e62de3f78 authored by Kay Zhu<kayzhu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Handle Reverse in HloEvaluator.
Also move HandleCopy to the outer visitor, since it can be implemented
as a type-agnostic copy.
PiperOrigin-RevId: 163866499
---
Commit 96675956e authored by Asim Shankar<ashankar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
C API: Avoid converting uninitialized tensorflow::Tensor to TF_Tensor*
And return error messages instead of CHECK failing when the conversion
fails.
PiperOrigin-RevId: 163863981
---
Commit 9593704b2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fix framework import function dependency.
PiperOrigin-RevId: 163863883
---
Commit 66f148542 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Improve performance of compilation by ~8% by speeding up the
hlo rematerialization pass.
Changes:
* Wrap each HloInstruction* inside an Item structure that keeps
associated data. This allows us to get rid of a bunch of
hash tables indexed by HloInstruction*.
* Switch to an intrusive linked list (instead of std::list) so
that we can avoid a hash table that maps to std::list::iterator.
* Use inlined vector in a few places.
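The intrusive-list idea above can be sketched in a few lines (a hypothetical illustration, not TensorFlow's actual rematerialization code; `Item` and `IntrusiveList` are made-up names standing in for the C++ structures): because each item carries its own prev/next links, finding and removing an instruction is a field access instead of a hash-table lookup mapping to a `std::list` iterator.

```python
class Item:
    """Wrapper around an instruction that carries its own list links."""
    def __init__(self, instruction):
        self.instruction = instruction  # stands in for HloInstruction*
        self.prev = None
        self.next = None

class IntrusiveList:
    """Doubly linked list whose links live inside the items themselves."""
    def __init__(self):
        self.head = None
        self.tail = None

    def push_back(self, item):
        item.prev = self.tail
        item.next = None
        if self.tail is not None:
            self.tail.next = item
        else:
            self.head = item
        self.tail = item

    def remove(self, item):
        # O(1) unlink with no auxiliary map from item to position.
        if item.prev is not None:
            item.prev.next = item.next
        else:
            self.head = item.next
        if item.next is not None:
            item.next.prev = item.prev
        else:
            self.tail = item.prev
        item.prev = item.next = None
```

Removal of an arbitrary element needs only the element itself, which is exactly what lets the pass drop the hash tables indexed by `HloInstruction*`.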
PiperOrigin-RevId: 163848365
---
Commit 6d77a0129 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Hide NonMaxSuppression and NonMaxSuppressionV2 ops and add a python wrapper that sets a backwards compatible default value for iou_threshold.
PiperOrigin-RevId: 163844703
---
Commit 1a4499607 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fix: add GDN to __init__. Also put it in alphabetical order.
PiperOrigin-RevId: 163842410
---
Commit db0e1c6c8 authored by Benoit Steiner<bsteiner@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Don't force inlining of functions marked no-inline
PiperOrigin-RevId: 163842238
---
Commit 18718b6f7 authored by Benoit Steiner<bsteiner@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Misc grappler improvements:
* Avoid copying optimized graphs since that takes time.
* Avoid optimizing a pruned graph: since it's already been pruned, there isn't much to gain
PiperOrigin-RevId: 163842122
---
Commit 90abbf684 authored by Benoit Steiner<bsteiner@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use OP_REQUIRES instead of an assertion to validate op arguments
PiperOrigin-RevId: 163841759
---
Commit 203c3f5fd authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Infer unknown shapes for functions in C++
As we are implementing function support through the C API, the new code path
runs shape inference on Operations representing functions, but we don't
yet support shape inference for functions.
Before this change, adding a function NodeDef would result in an error.
This change pairs all functions with a shape inference function that
sets all output shapes to unknown.
PiperOrigin-RevId: 163830793
---
Commit 3cc5fc088 authored by Chris Leary<leary@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Implement MirrorPad op.
Addresses #11890
* Improves the shape inference error message for concatenate.
* Adds a helper to Literal that gets an integral value converted to int64.
PiperOrigin-RevId: 163829437
---
Commit c7b674fa2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
flatten_up_to should return values, not keys
PiperOrigin-RevId: 163809688
---
Commit 6209b4b52 authored by Asim Shankar<ashankar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Minor refactoring the TF_Tensor <-> PyArray conversion functions.
PiperOrigin-RevId: 163802822
---
Commit 618f913bb authored by Yao Zhang<yaozhang@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Speed up topological sort by avoiding copies. The speedup is about 10-20%.
PiperOrigin-RevId: 163800134
---
Commit 6446895aa authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Remove and replace broken giflib download link
PiperOrigin-RevId: 163796393
---
Commit 9d5613088 authored by Chris Leary<leary@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA:CPU] Atomically enqueue tuple buffers for outfeed.
Previously it was possible that a distinct thread could hop in between the
buffer enqueues done by a tuple-outfeeding thread. This changes the sequence to
enqueue all the tuple buffers as an atomic unit.
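The fix described above amounts to holding one lock across all of a tuple's buffer enqueues. A minimal sketch of that pattern (illustrative only; `OutfeedQueue` is a made-up name, not XLA's actual outfeed code, which is C++):

```python
import threading
from collections import deque

class OutfeedQueue:
    """Enqueues all buffers of a tuple as one atomic unit."""
    def __init__(self):
        self._lock = threading.Lock()
        self._queue = deque()

    def enqueue_tuple(self, buffers):
        # Holding the lock across every append guarantees no other
        # thread's buffers can interleave with this tuple's buffers.
        with self._lock:
            for buf in buffers:
                self._queue.append(buf)

    def dequeue(self):
        with self._lock:
            return self._queue.popleft()
```

Without the single lock held across the loop, a second thread could hop in between two appends, which is exactly the interleaving the change prevents.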
PiperOrigin-RevId: 163781804
---
Commit b882d686f authored by Bjarke Hammersholt Roune<broune@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Allow cost estimates to differ per backend and include the estimates into the HLO profile. Add a summary table for what categories have the most opportunity for optimization left in them.
PiperOrigin-RevId: 163780413
---
Commit 14b736761 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Pass stats_collector when using SymbolicGradientOp.
PiperOrigin-RevId: 163773897
---
Commit 5202a5b6c authored by RJ Ryan<rjryan@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fix some typos in StreamExecutor's cuFFT support.
PiperOrigin-RevId: 163771825
---
Commit edac90c7c authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add support to generate pprof results to tf.profiler
A fun thing is that it can profile not only time and memory
but also parameters, etc.
PiperOrigin-RevId: 163767517
---
Commit 78a90370e authored by Eli Bendersky<eliben@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Refactor CreateModuleConfig to share code between multiple call-sites.
Previously Service, LocalService and CompileOnlyService had their own code to
create a new HloModuleConfig, with much repetition (and some omissions);
collect all these uses in a single method.
PiperOrigin-RevId: 163766869
---
Commit 6150611ae authored by Anna R<annarev@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Internal change.
PiperOrigin-RevId: 163765028
---
Commit 9e7875437 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add the option of including Shape, ShapeN, Size and Rank in the standard TensorFlow constant propagation pass, when the inputs to those Ops have sufficiently known static shape.
PiperOrigin-RevId: 163762750
---
Commit 8b1365bb4 authored by Yuefeng Zhou<yuefengz@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Infer output shape for restore op.
PiperOrigin-RevId: 163762216
---
Commit 2e2a8536d authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update WorkerCacheLogger::RecordDataTransfer to not modify the details if provided.
PiperOrigin-RevId: 163761089
---
Commit d03ba54f7 authored by Yangzihao Wang<yangzihao@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use BlasGemv() when autotune is not set.
PiperOrigin-RevId: 163754092
---
Commit 724884f1c authored by Justin Lebar<jlebar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Show layouts in HLO graph dump.
Layouts are displayed as e.g. "f32[100,200]{0,1}". But constants used
to be displayed as e.g. "f32[]{42}". To avoid ambiguity, constants are
now displayed as e.g. "42 (f32[])".
Also gets rid of the xla_hlo_graph_layout flag, which is no longer
necessary since we're now showing layouts unconditionally.
PiperOrigin-RevId: 163753637
---
Commit 84c2757a6 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Move Grappler test GraphDefs to separate files
PiperOrigin-RevId: 163751948
---
Commit 0b3a25d68 authored by Asim Shankar<ashankar@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Make TF_RESOURCE memory handling consistent with other types.
TF_Tensors are backed by a contiguous memory region for all
but TF_RESOURCE tensors. The memory management of TF_RESOURCE
tensors required keeping a backing tensorflow::ResourceHandle*
object alive for the lifetime of the TF_Tensor object.
This change removes that discrepancy, making the memory backing
TF_RESOURCE tensors self-contained. This simplifies use of TF_RESOURCE
tensors in the C API (as users of the C API do not need to worry about
a tensorflow::ResourceHandle object and its lifetime). In doing so, this
moves a string memory copy from the TF_Tensor <-> Numpy conversion
to the C API from the Python session helper.
Unfortunately, I couldn't figure out how to add a simple unittest in
c_api_test.cc. The more comprehensive tensorflow/python/kernel_tests/session_ops_test.py
does cover the changed lines though.
Additionally, avoid an unnecessary copy when creating TF_STRING or TF_RESOURCE
tensors (as eigen alignment is not a requirement for them).
PiperOrigin-RevId: 163751880
---
Commit 1333e7745 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Allow one tensor to be the input to the estimator.
PiperOrigin-RevId: 163747076
---
Commit 104f349e9 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Update Conv2DShape function to handle filters that have data NCHW_VECT_C layout.
PiperOrigin-RevId: 163746769
---
Commit efb7fb8e5 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Use XLA_VLOG_LINES() in literal_test_util to avoid truncation of large tensors.
PiperOrigin-RevId: 163745522
---
Commit 043505a09 authored by Suharsh Sivakumar<suharshs@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
MasterSession should throw error if use_per_session_threads or session_inter_op_thread_pool is set.
PiperOrigin-RevId: 163743936
---
Commit 6ba02f0e9 authored by Artem Belevich<tra@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] Added HasAllocationAt() helper function.
PiperOrigin-RevId: 163742985
---
Commit 18304683e authored by Justine Tunney<jart@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Pin tensorflow to tensorflow-tensorboard 0.1.x
This change will be cherry-picked into the 1.3.0 release.
PiperOrigin-RevId: 163742463
---
Commit 3445dd0ed authored by Justine Tunney<jart@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Make 'import tensorflow' go faster
It now takes about 400ms rather than 800ms, if the file system cache is warm.
Most of the latency was due to parsing text_format OpList protocol buffers in
our generated sources. We now use a binary representation, while preserving the
text proto as a comment for readability.
Note: This change does not improve the latency of dereferencing tf.contrib,
which takes about 340ms.
PiperOrigin-RevId: 163739355
---
Commit c215c55d5 authored by Neal Wu<wun@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add missing py_binary for mnist_deep.py
PiperOrigin-RevId: 163737503
---
Commit b663c9899 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Make non-iterable input to `stratified_sample` produce better error message.
PiperOrigin-RevId: 163735979
---
Commit 122750a87 authored by Peter Hawkins<phawkins@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[SE] Make ExecutorCache thread-safe, change ExecutorCache::Insert to ExecutorCache::GetOrCreate. Add support for creating Executors for different device ordinals in parallel.
[XLA] Create Executors in parallel.
PiperOrigin-RevId: 163734988
---
Commit 7ebed6678 authored by Frank Chen<frankchn@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add __init__.py to the contrib/cluster_resolver directory so that the Cluster Resolver classes within this are visible to open source TensorFlow users.
PiperOrigin-RevId: 163733781
---
Commit 21faf19d0 authored by Shanqing Cai<cais@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Upgrade pip version to 9.0.1; Prettier format of log text
* Upgrade pip version used in virtualenv created by the test-on-install to latest (9.0.1).
* Highlight step titles of pip builds with bold font.
PiperOrigin-RevId: 163732825
---
Commit 5887cc10e authored by Kay Zhu<kayzhu@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[XLA] In LiteralUtil::StridedConfig: choose the larger dimension between the
source and destination shapes' minor-most dimension index.
PiperOrigin-RevId: 163732014
---
Commit f9c644693 authored by Peter Hawkins<phawkins@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Disable sanitizers for depthwise conv test to fix test flakiness.
PiperOrigin-RevId: 163727765
---
Commit 6263539a1 authored by Allen Lavoie<allenl@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Grappler memory optimization: allow inputs to gradients with non-standard names to be recomputed
Includes Python tests for name-scoped gradients.
PiperOrigin-RevId: 163720208
---
Commit 4ac195669 authored by Benoit Steiner<benoitsteiner@users.noreply.github.com>
Committed by GitHub<noreply@github.com>:
Branch 163695881 (#11913)
* Prevent ctc_loss op from segfaulting when given empty batch.
PiperOrigin-RevId: 163663460
* New "SavedModel: Practical Uses" and "SavedModel: Architecture" documents.
PiperOrigin-RevId: 163669809
* Minor cleanup
PiperOrigin-RevId: 163685423
* Add regression variance over individual trees to TensorForest inference.
PiperOrigin-RevId: 163695881
---
Commit b876065af authored by Alexandre Passos<apassos@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
graph_to_function_def gets its own file
PiperOrigin-RevId: 163709410
---
Commit 29550762b authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Fixes unit tests for inverse hyperbolic functions that were failing because numeric gradients were computed too close to a branch cut (for complex arguments) or singularity (for real arguments) where the function is not differentiable (See, e.g., http://mathworld.wolfram.com/BranchCut.html). This change moves the test points away from the branch cut/singularity.
Improves precision of double precision numerical gradients by using a smaller step size delta (the optimal for symmetric difference approximation with functions computed with O(epsilon) error is epsilon^(1/3), so for double64 it is ~1e-5).
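The step-size choice can be shown concretely (a sketch, not the TensorFlow gradient checker itself; `numeric_grad` is a hypothetical helper): the symmetric difference has truncation error O(delta^2) and rounding error O(eps/delta), and the sum is minimized at delta ~ eps^(1/3), which for float64 (eps ~ 2.2e-16) gives delta ~ 6e-6, i.e. the ~1e-5 quoted above.

```python
import math

def numeric_grad(f, x, eps=2.0 ** -52):
    # Symmetric (central) difference approximation of f'(x).
    # Truncation error ~ delta**2, rounding error ~ eps / delta;
    # the total is minimized at delta ~ eps ** (1/3), about 6e-6
    # for float64.
    delta = eps ** (1.0 / 3.0)
    return (f(x + delta) - f(x - delta)) / (2.0 * delta)
```

With this delta, the gradient of a smooth function like `sin` is recovered to roughly 10 decimal digits, versus about 8 with the cruder delta = sqrt(eps) used for forward differences.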
PiperOrigin-RevId: 163706297
---
Commit 99b190a1f authored by Peter Hawkins<phawkins@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
[TF:XLA] Add implementation of depthwise convolution.
This implementation expands the depthwise convolution kernels into a regular convolution kernel, which may not scale to large feature depths.
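The expansion described above can be sketched as follows (an illustrative toy, not the TF:XLA C++ implementation; `expand_depthwise_kernel` is a hypothetical helper): a depthwise kernel of shape [H, W, C, M] becomes a dense kernel of shape [H, W, C, C*M] that is zero except where the output channel belongs to its own input channel, so a regular convolution with the expanded kernel computes the depthwise convolution. The zero padding is why this may not scale to large feature depths.

```python
def expand_depthwise_kernel(dw, in_channels, multiplier):
    """Expand dw[h][w][c][m] into full[h][w][c][c * multiplier + m],
    zero elsewhere, so a regular convolution with `full` equals the
    depthwise convolution with `dw`."""
    H, W = len(dw), len(dw[0])
    out_channels = in_channels * multiplier
    full = [[[[0.0] * out_channels for _ in range(in_channels)]
             for _ in range(W)] for _ in range(H)]
    for h in range(H):
        for w in range(W):
            for c in range(in_channels):
                for m in range(multiplier):
                    full[h][w][c][c * multiplier + m] = dw[h][w][c][m]
    return full
```

The expanded kernel has C times as many entries as the depthwise one, almost all zeros, which illustrates the scaling caveat in the commit message.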
PiperOrigin-RevId: 163705408
---
Commit f6f07b027 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Allow kernel unit tests to run on GPU
PiperOrigin-RevId: 163705027
---
Commit 4ec29c5d9 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Avoid direct access to Node::def() where some other method works.
PiperOrigin-RevId: 163704839
---
Commit 153be4d26 authored by Luke Iwanski<luke@codeplay.com>
Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>:
[OpenCL] Stats tracking (#11523)
* Adds stat tracking to the SYCL allocator
The SYCLAllocator will now find the max allocation size on construction,
and keep track of the allocation stats, as given in AllocationStats.
* [OpenCL] Adds buffer size tracking to SYCL allocator (#114)
The SYCL buffers underlying tensors already keep track of their sizes,
so we can easily provide this tracking information for debugging
purposes.
---
Commit 8d642672f authored by Amit Patankar<amitpatankar@google.com>
Committed by Amit Patankar<amitpatankar@google.com>:
Disabling gmm_test.py on Windows builds as it's flaky on GPU nightly builds.
---
Commit 1560c55d2 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Add regression variance over individual trees to TensorForest inference.
PiperOrigin-RevId: 163695881
---
Commit 15e928d51 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Minor cleanup
PiperOrigin-RevId: 163685423
---
Commit f9c758719 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
New "SavedModel: Practical Uses" and "SavedModel: Architecture" documents.
PiperOrigin-RevId: 163669809
---
Commit f19bb3beb authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Prevent ctc_loss op from segfaulting when given empty batch.
PiperOrigin-RevId: 163663460
---
Commit 454fe936c authored by Taehoon Lee<taehoonlee@snu.ac.kr>
Committed by Taehoon Lee<taehoonlee@snu.ac.kr>:
Fix typos
---
Commit e17650b69 authored by Frank Chen<frankchn@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
This adds the cluster_resolver module to contrib/__init__.py so that it is actually visible to open source TensorFlow users.
PiperOrigin-RevId: 163636676
---
Commit 926c0f6ee authored by ??<awsomekde@gmail.com>
Committed by GitHub<noreply@github.com>:
fix minor typo
---
Commit 00d3126a3 authored by Yao Zhang<yaozhang@google.com>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
Change const nodes to variables in the test, so that they are not optimized
away by the grappler constant folding pass.
PiperOrigin-RevId: 163602405
---
Commit 1c7c9c716 authored by Aditya Dhulipala<aditya.d@hotmail.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Minor typo correction (#11874)
---
Commit f91a3546e authored by Sergii Khomenko<x-sam@brainscode.com>
Committed by Vijay Vasudevan<vrv@google.com>:
Fix a minor typo (#11873)
---
Commit adf5d1bc0 authored by A. Unique TensorFlower<gardener@tensorflow.org>
Committed by TensorFlower Gardener<gardener@tensorflow.org>:
BEGIN_PUBLIC
Automated g4 rollback of changelist 163510186
PiperOrigin-RevId: 163902859
Diffstat (limited to 'tensorflow/examples/android')
6 files changed, 522 insertions, 253 deletions
diff --git a/tensorflow/examples/android/src/org/tensorflow/demo/CameraActivity.java b/tensorflow/examples/android/src/org/tensorflow/demo/CameraActivity.java index 27d7e41487..b9542582e6 100644 --- a/tensorflow/examples/android/src/org/tensorflow/demo/CameraActivity.java +++ b/tensorflow/examples/android/src/org/tensorflow/demo/CameraActivity.java @@ -19,22 +19,37 @@ package org.tensorflow.demo; import android.Manifest; import android.app.Activity; import android.app.Fragment; +import android.content.Context; import android.content.pm.PackageManager; +import android.graphics.Bitmap; +import android.hardware.Camera; +import android.hardware.camera2.CameraAccessException; +import android.hardware.camera2.CameraCharacteristics; +import android.hardware.camera2.CameraManager; +import android.hardware.camera2.params.StreamConfigurationMap; +import android.media.Image; import android.media.Image.Plane; +import android.media.ImageReader; import android.media.ImageReader.OnImageAvailableListener; import android.os.Build; import android.os.Bundle; import android.os.Handler; import android.os.HandlerThread; +import android.os.Trace; import android.util.Size; import android.view.KeyEvent; import android.view.WindowManager; import android.widget.Toast; import java.nio.ByteBuffer; + +import org.tensorflow.demo.env.ImageUtils; import org.tensorflow.demo.env.Logger; + +// Explicit import needed for internal Google builds. import org.tensorflow.demo.R; -public abstract class CameraActivity extends Activity implements OnImageAvailableListener { +public abstract class CameraActivity extends Activity implements OnImageAvailableListener, Camera. 
+ PreviewCallback { private static final Logger LOGGER = new Logger(); private static final int PERMISSIONS_REQUEST = 1; @@ -46,6 +61,20 @@ public abstract class CameraActivity extends Activity implements OnImageAvailabl private Handler handler; private HandlerThread handlerThread; + private boolean useCamera2API; + protected Bitmap rgbFrameBitmap = null; + private int[] rgbBytes = null; + protected int previewWidth = 0; + protected int previewHeight = 0; + protected Bitmap croppedBitmap = null; + protected static final boolean SAVE_PREVIEW_BITMAP = false; + protected long lastProcessingTimeMs; + protected Bitmap cropCopyBitmap; + protected ResultsView resultsView; + protected boolean computing = false; + protected Runnable postInferenceCallback; + protected byte[][] yuvBytes=new byte[3][]; + protected int yRowStride; @Override protected void onCreate(final Bundle savedInstanceState) { @@ -62,6 +91,93 @@ public abstract class CameraActivity extends Activity implements OnImageAvailabl } } + /** + * Callback for android.hardware.Camera API + */ + @Override + public void onPreviewFrame(final byte[] bytes, final Camera camera) { + if (computing) { + return; + } + computing = true; + yuvBytes[0] = bytes; + try { + // Initialize the storage bitmaps once when the resolution is known. 
+ if (rgbBytes == null) { + Camera.Size previewSize = camera.getParameters().getPreviewSize(); + previewHeight = previewSize.height; + previewWidth = previewSize.width; + rgbBytes = new int[previewWidth * previewHeight]; + onPreviewSizeChosen(new Size(previewSize.width, previewSize.height), 90); + } + ImageUtils.convertYUV420SPToARGB8888(bytes, rgbBytes, previewWidth, previewHeight, false); + } catch (final Exception e) { + LOGGER.e(e, "Exception!"); + return; + } + postInferenceCallback = new Runnable() { + @Override + public void run() { + camera.addCallbackBuffer(bytes); + } + }; + processImageRGBbytes(rgbBytes); + } + + /** + * Callback for Camera2 API + */ + @Override + public void onImageAvailable(final ImageReader reader) { + Image image = null; + //We need wait until we have some size from onPreviewSizeChosen + if (previewWidth == 0 || previewHeight == 0) { + return; + } + rgbBytes = new int[previewWidth * previewHeight]; + try { + image = reader.acquireLatestImage(); + + if (image == null) { + return; + } + + if (computing) { + image.close(); + return; + } + computing = true; + Trace.beginSection("imageAvailable"); + final Plane[] planes = image.getPlanes(); + fillBytes(planes, yuvBytes); + yRowStride = planes[0].getRowStride(); + final int uvRowStride = planes[1].getRowStride(); + final int uvPixelStride = planes[1].getPixelStride(); + ImageUtils.convertYUV420ToARGB8888( + yuvBytes[0], + yuvBytes[1], + yuvBytes[2], + rgbBytes, + previewWidth, + previewHeight, + yRowStride, + uvRowStride, + uvPixelStride, + false); + image.close(); + + } catch (final Exception e) { + if (image != null) { + image.close(); + } + LOGGER.e(e, "Exception!"); + Trace.endSection(); + return; + } + processImageRGBbytes(rgbBytes); + Trace.endSection(); + } + @Override public synchronized void onStart() { LOGGER.d("onStart " + this); @@ -123,8 +239,8 @@ public abstract class CameraActivity extends Activity implements OnImageAvailabl switch (requestCode) { case PERMISSIONS_REQUEST: { 
if (grantResults.length > 0 - && grantResults[0] == PackageManager.PERMISSION_GRANTED - && grantResults[1] == PackageManager.PERMISSION_GRANTED) { + && grantResults[0] == PackageManager.PERMISSION_GRANTED + && grantResults[1] == PackageManager.PERMISSION_GRANTED) { setFragment(); } else { requestPermission(); @@ -135,7 +251,8 @@ public abstract class CameraActivity extends Activity implements OnImageAvailabl private boolean hasPermission() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { - return checkSelfPermission(PERMISSION_CAMERA) == PackageManager.PERMISSION_GRANTED && checkSelfPermission(PERMISSION_STORAGE) == PackageManager.PERMISSION_GRANTED; + return checkSelfPermission(PERMISSION_CAMERA) == PackageManager.PERMISSION_GRANTED && + checkSelfPermission(PERMISSION_STORAGE) == PackageManager.PERMISSION_GRANTED; } else { return true; } @@ -143,25 +260,80 @@ public abstract class CameraActivity extends Activity implements OnImageAvailabl private void requestPermission() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { - if (shouldShowRequestPermissionRationale(PERMISSION_CAMERA) || shouldShowRequestPermissionRationale(PERMISSION_STORAGE)) { - Toast.makeText(CameraActivity.this, "Camera AND storage permission are required for this demo", Toast.LENGTH_LONG).show(); + if (shouldShowRequestPermissionRationale(PERMISSION_CAMERA) || + shouldShowRequestPermissionRationale(PERMISSION_STORAGE)) { + Toast.makeText(CameraActivity.this, + "Camera AND storage permission are required for this demo", Toast.LENGTH_LONG).show(); } requestPermissions(new String[] {PERMISSION_CAMERA, PERMISSION_STORAGE}, PERMISSIONS_REQUEST); } } + // Returns true if the device supports the required hardware level, or better. 
+ boolean isHardwareLevelSupported(CameraCharacteristics characteristics, int requiredLevel) { + int deviceLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL); + if (deviceLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) { + return requiredLevel == deviceLevel; + } + // deviceLevel is not LEGACY, can use numerical sort + return requiredLevel <= deviceLevel; + } + + private String chooseCamera() { + final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE); + try { + for (final String cameraId : manager.getCameraIdList()) { + final CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId); + + // We don't use a front facing camera in this sample. + final Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING); + if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) { + continue; + } + + final StreamConfigurationMap map = + characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP); + + if (map == null) { + continue; + } + + useCamera2API = isHardwareLevelSupported(characteristics, + CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL); + LOGGER.i("Camera API lv2?: %s", useCamera2API); + return cameraId; + } + } catch (CameraAccessException e) { + LOGGER.e(e, "Not allowed to access camera"); + } + + return null; + } + protected void setFragment() { - final Fragment fragment = - CameraConnectionFragment.newInstance( - new CameraConnectionFragment.ConnectionCallback() { - @Override - public void onPreviewSizeChosen(final Size size, final int rotation) { - CameraActivity.this.onPreviewSizeChosen(size, rotation); - } - }, - this, - getLayoutId(), - getDesiredPreviewFrameSize()); + String cameraId = chooseCamera(); + + Fragment fragment; + if (useCamera2API) { + CameraConnectionFragment camera2Fragment = + CameraConnectionFragment.newInstance( + new CameraConnectionFragment.ConnectionCallback() { + @Override + 
public void onPreviewSizeChosen(final Size size, final int rotation) { + previewHeight = size.getHeight(); + previewWidth = size.getWidth(); + CameraActivity.this.onPreviewSizeChosen(size, rotation); + } + }, + this, + getLayoutId(), + getDesiredPreviewFrameSize()); + + camera2Fragment.setCamera(cameraId); + fragment = camera2Fragment; + } else { + fragment = new LegacyCameraConnectionFragment(this, getLayoutId()); + } getFragmentManager() .beginTransaction() @@ -213,6 +385,7 @@ public abstract class CameraActivity extends Activity implements OnImageAvailabl return super.onKeyDown(keyCode, event); } + protected abstract void processImageRGBbytes(int[] rgbBytes ) ; protected abstract void onPreviewSizeChosen(final Size size, final int rotation); protected abstract int getLayoutId(); protected abstract Size getDesiredPreviewFrameSize(); diff --git a/tensorflow/examples/android/src/org/tensorflow/demo/CameraConnectionFragment.java b/tensorflow/examples/android/src/org/tensorflow/demo/CameraConnectionFragment.java index 76bd61d00f..986f2777b2 100644 --- a/tensorflow/examples/android/src/org/tensorflow/demo/CameraConnectionFragment.java +++ b/tensorflow/examples/android/src/org/tensorflow/demo/CameraConnectionFragment.java @@ -353,58 +353,44 @@ public class CameraConnectionFragment extends Fragment { super.onPause(); } + public void setCamera(String cameraId) { + this.cameraId = cameraId; + } + /** * Sets up member variables related to camera. 
- * - * @param width The width of available size for camera preview - * @param height The height of available size for camera preview */ - private void setUpCameraOutputs(final int width, final int height) { + private void setUpCameraOutputs() { final Activity activity = getActivity(); final CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE); try { - for (final String cameraId : manager.getCameraIdList()) { - final CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId); - - // We don't use a front facing camera in this sample. - final Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING); - if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) { - continue; - } - - final StreamConfigurationMap map = - characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP); - - if (map == null) { - continue; - } - - // For still image captures, we use the largest available size. - final Size largest = - Collections.max( - Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)), - new CompareSizesByArea()); - - sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION); - - // Danger, W.R.! Attempting to use too large a preview size could exceed the camera - // bus' bandwidth limitation, resulting in gorgeous previews but the storage of - // garbage capture data. - previewSize = - chooseOptimalSize( - map.getOutputSizes(SurfaceTexture.class), - inputSize.getWidth(), - inputSize.getHeight()); - - // We fit the aspect ratio of TextureView to the size of preview we picked. 
- final int orientation = getResources().getConfiguration().orientation; - if (orientation == Configuration.ORIENTATION_LANDSCAPE) { - textureView.setAspectRatio(previewSize.getWidth(), previewSize.getHeight()); - } else { - textureView.setAspectRatio(previewSize.getHeight(), previewSize.getWidth()); - } - - CameraConnectionFragment.this.cameraId = cameraId; + final CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId); + + final StreamConfigurationMap map = + characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP); + + // For still image captures, we use the largest available size. + final Size largest = + Collections.max( + Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)), + new CompareSizesByArea()); + + sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION); + + // Danger, W.R.! Attempting to use too large a preview size could exceed the camera + // bus' bandwidth limitation, resulting in gorgeous previews but the storage of + // garbage capture data. + previewSize = + chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class), + inputSize.getWidth(), + inputSize.getHeight()); + + // We fit the aspect ratio of TextureView to the size of preview we picked. + final int orientation = getResources().getConfiguration().orientation; + if (orientation == Configuration.ORIENTATION_LANDSCAPE) { + textureView.setAspectRatio(previewSize.getWidth(), previewSize.getHeight()); + } else { + textureView.setAspectRatio(previewSize.getHeight(), previewSize.getWidth()); } } catch (final CameraAccessException e) { LOGGER.e(e, "Exception!"); @@ -425,7 +411,7 @@ public class CameraConnectionFragment extends Fragment { * Opens the camera specified by {@link CameraConnectionFragment#cameraId}. 
    */
   private void openCamera(final int width, final int height) {
-    setUpCameraOutputs(width, height);
+    setUpCameraOutputs();
     configureTransform(width, height);
     final Activity activity = getActivity();
     final CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
diff --git a/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java b/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java
index bc39126925..ab48e2265b 100644
--- a/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java
+++ b/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java
@@ -22,21 +22,21 @@ import android.graphics.Canvas;
 import android.graphics.Matrix;
 import android.graphics.Paint;
 import android.graphics.Typeface;
-import android.media.Image;
-import android.media.Image.Plane;
-import android.media.ImageReader;
+
 import android.media.ImageReader.OnImageAvailableListener;
 import android.os.SystemClock;
-import android.os.Trace;
 import android.util.Size;
 import android.util.TypedValue;
 import android.view.Display;
+
 import java.util.List;
 import java.util.Vector;
 import org.tensorflow.demo.OverlayView.DrawCallback;
 import org.tensorflow.demo.env.BorderedText;
 import org.tensorflow.demo.env.ImageUtils;
 import org.tensorflow.demo.env.Logger;
+
+// Explicit import needed for internal Google builds.
 import org.tensorflow.demo.R;

 public class ClassifierActivity extends CameraActivity implements OnImageAvailableListener {
@@ -64,39 +64,25 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
   private static final String INPUT_NAME = "input";
   private static final String OUTPUT_NAME = "output";
+
   private static final String MODEL_FILE = "file:///android_asset/tensorflow_inception_graph.pb";
   private static final String LABEL_FILE =
       "file:///android_asset/imagenet_comp_graph_label_strings.txt";
-
   private static final boolean SAVE_PREVIEW_BITMAP = false;
   private static final boolean MAINTAIN_ASPECT = true;

   private static final Size DESIRED_PREVIEW_SIZE = new Size(640, 480);

-  private Classifier classifier;
   private Integer sensorOrientation;
-
-  private int previewWidth = 0;
-  private int previewHeight = 0;
-  private byte[][] yuvBytes;
-  private int[] rgbBytes = null;
-  private Bitmap rgbFrameBitmap = null;
-  private Bitmap croppedBitmap = null;
-
-  private Bitmap cropCopyBitmap;
-
-  private boolean computing = false;
-
+  private Classifier classifier;
   private Matrix frameToCropTransform;
   private Matrix cropToFrameTransform;
-
   private ResultsView resultsView;
   private BorderedText borderedText;
-  private long lastProcessingTimeMs;

   @Override
   protected int getLayoutId() {
@@ -112,9 +98,8 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
   @Override
   public void onPreviewSizeChosen(final Size size, final int rotation) {
-    final float textSizePx =
-        TypedValue.applyDimension(
-            TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
+    final float textSizePx = TypedValue.applyDimension(
+        TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
     borderedText = new BorderedText(textSizePx);
     borderedText.setTypeface(Typeface.MONOSPACE);
@@ -129,7 +114,6 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
         INPUT_NAME, OUTPUT_NAME);
-
     resultsView = (ResultsView) findViewById(R.id.results);
     previewWidth = size.getWidth();
     previewHeight = size.getHeight();
@@ -141,15 +125,13 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
     sensorOrientation = rotation + screenOrientation;
     LOGGER.i("Initializing at size %dx%d", previewWidth, previewHeight);
-    rgbBytes = new int[previewWidth * previewHeight];
     rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
     croppedBitmap = Bitmap.createBitmap(INPUT_SIZE, INPUT_SIZE, Config.ARGB_8888);

-    frameToCropTransform =
-        ImageUtils.getTransformationMatrix(
-            previewWidth, previewHeight,
-            INPUT_SIZE, INPUT_SIZE,
-            sensorOrientation, MAINTAIN_ASPECT);
+    frameToCropTransform = ImageUtils.getTransformationMatrix(
+        previewWidth, previewHeight,
+        INPUT_SIZE, INPUT_SIZE,
+        sensorOrientation, MAINTAIN_ASPECT);

     cropToFrameTransform = new Matrix();
     frameToCropTransform.invert(cropToFrameTransform);
@@ -165,52 +147,7 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
       });
   }

-  @Override
-  public void onImageAvailable(final ImageReader reader) {
-    Image image = null;
-
-    try {
-      image = reader.acquireLatestImage();
-
-      if (image == null) {
-        return;
-      }
-
-      if (computing) {
-        image.close();
-        return;
-      }
-      computing = true;
-
-      Trace.beginSection("imageAvailable");
-
-      final Plane[] planes = image.getPlanes();
-      fillBytes(planes, yuvBytes);
-
-      final int yRowStride = planes[0].getRowStride();
-      final int uvRowStride = planes[1].getRowStride();
-      final int uvPixelStride = planes[1].getPixelStride();
-      ImageUtils.convertYUV420ToARGB8888(
-          yuvBytes[0],
-          yuvBytes[1],
-          yuvBytes[2],
-          previewWidth,
-          previewHeight,
-          yRowStride,
-          uvRowStride,
-          uvPixelStride,
-          rgbBytes);
-
-      image.close();
-    } catch (final Exception e) {
-      if (image != null) {
-        image.close();
-      }
-      LOGGER.e(e, "Exception!");
-      Trace.endSection();
-      return;
-    }
-
+  protected void processImageRGBbytes(int[] rgbBytes ) {
     rgbFrameBitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight);
     final Canvas canvas = new Canvas(croppedBitmap);
     canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);
@@ -219,7 +156,6 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
     if (SAVE_PREVIEW_BITMAP) {
       ImageUtils.saveBitmap(croppedBitmap);
     }
-
     runInBackground(
         new Runnable() {
           @Override
@@ -227,15 +163,19 @@ public class ClassifierActivity extends CameraActivity implements OnImageAvailab
             final long startTime = SystemClock.uptimeMillis();
             final List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);
             lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
-
+            LOGGER.i("Detect: %s", results);
             cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
+            if (resultsView == null) {
+              resultsView = (ResultsView) findViewById(R.id.results);
+            }
             resultsView.setResults(results);
             requestRender();
             computing = false;
+            if (postInferenceCallback != null) {
+              postInferenceCallback.run();
+            }
           }
         });
-
-    Trace.endSection();
   }

   @Override
diff --git a/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java b/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java
index 5800f80651..acace0eace 100644
--- a/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java
+++ b/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java
@@ -66,7 +66,7 @@ public class DetectorActivity extends CameraActivity implements OnImageAvailable
   // must be manually placed in the assets/ directory by the user.
   // Graphs and models downloaded from http://pjreddie.com/darknet/yolo/ may be converted e.g. via
   // DarkFlow (https://github.com/thtrieu/darkflow). Sample command:
-  // ./flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --savepb --verbalise=True
+  // ./flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --savepb --verbalise
   private static final String YOLO_MODEL_FILE = "file:///android_asset/graph-tiny-yolo-voc.pb";
   private static final int YOLO_INPUT_SIZE = 416;
   private static final String YOLO_INPUT_NAME = "input";
@@ -126,6 +126,7 @@ public class DetectorActivity extends CameraActivity implements OnImageAvailable
     tracker = new MultiBoxTracker(this);
+
     if (USE_YOLO) {
       detector =
           TensorFlowYoloDetector.create(
@@ -270,15 +271,17 @@ public class DetectorActivity extends CameraActivity implements OnImageAvailable
     final int uvRowStride = planes[1].getRowStride();
     final int uvPixelStride = planes[1].getPixelStride();
     ImageUtils.convertYUV420ToARGB8888(
-        yuvBytes[0],
-        yuvBytes[1],
-        yuvBytes[2],
-        previewWidth,
-        previewHeight,
-        yRowStride,
-        uvRowStride,
-        uvPixelStride,
-        rgbBytes);
+        yuvBytes[0],
+        yuvBytes[1],
+        yuvBytes[2],
+        rgbBytes,
+        previewWidth,
+        previewHeight,
+        yRowStride,
+        uvRowStride,
+        uvPixelStride,
+        false);

     image.close();
   } catch (final Exception e) {
@@ -344,6 +347,8 @@ public class DetectorActivity extends CameraActivity implements OnImageAvailable
     Trace.endSection();
   }

+  protected void processImageRGBbytes(int[] rgbBytes ) {}
+
   @Override
   protected int getLayoutId() {
     return R.layout.camera_connection_fragment_tracking;
diff --git a/tensorflow/examples/android/src/org/tensorflow/demo/LegacyCameraConnectionFragment.java b/tensorflow/examples/android/src/org/tensorflow/demo/LegacyCameraConnectionFragment.java
new file mode 100644
index 0000000000..e5b3eeeceb
--- /dev/null
+++ b/tensorflow/examples/android/src/org/tensorflow/demo/LegacyCameraConnectionFragment.java
@@ -0,0 +1,208 @@
+package org.tensorflow.demo;
+
+/*
+ * Copyright 2014 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import android.app.Fragment;
+import android.graphics.SurfaceTexture;
+import android.os.Bundle;
+import android.os.Handler;
+import android.os.HandlerThread;
+import android.util.SparseIntArray;
+import android.view.LayoutInflater;
+import android.view.Surface;
+import android.view.TextureView;
+import android.view.View;
+import android.view.ViewGroup;
+
+import java.io.IOException;
+
+import android.hardware.Camera;
+import android.hardware.Camera.CameraInfo;
+
+import org.tensorflow.demo.env.Logger;
+
+// Explicit import needed for internal Google builds.
+import org.tensorflow.demo.R;
+
+public class LegacyCameraConnectionFragment extends Fragment {
+
+  private Camera camera;
+  private static final Logger LOGGER = new Logger();
+  private Camera.PreviewCallback imageListener;
+
+  /**
+   * The layout identifier to inflate for this Fragment.
+   */
+  private int layout;
+
+  public LegacyCameraConnectionFragment(
+      final Camera.PreviewCallback imageListener, final int layout) {
+    this.imageListener = imageListener;
+    this.layout = layout;
+  }
+
+  /**
+   * Conversion from screen rotation to JPEG orientation.
+   */
+  private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
+
+  static {
+    ORIENTATIONS.append(Surface.ROTATION_0, 90);
+    ORIENTATIONS.append(Surface.ROTATION_90, 0);
+    ORIENTATIONS.append(Surface.ROTATION_180, 270);
+    ORIENTATIONS.append(Surface.ROTATION_270, 180);
+  }
+
+  /**
+   * {@link android.view.TextureView.SurfaceTextureListener} handles several lifecycle events on a
+   * {@link TextureView}.
+   */
+  private final TextureView.SurfaceTextureListener surfaceTextureListener =
+      new TextureView.SurfaceTextureListener() {
+        @Override
+        public void onSurfaceTextureAvailable(
+            final SurfaceTexture texture, final int width, final int height) {
+
+          int index = getCameraId();
+          camera = Camera.open(index);
+
+          try {
+            Camera.Parameters parameters = camera.getParameters();
+            parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
+
+            camera.setDisplayOrientation(90);
+            camera.setParameters(parameters);
+            camera.setPreviewTexture(texture);
+          } catch (IOException exception) {
+            camera.release();
+          }
+
+          camera.setPreviewCallbackWithBuffer(imageListener);
+          Camera.Size s = camera.getParameters().getPreviewSize();
+          int bufferSize = s.height * s.width * 3 / 2;
+          camera.addCallbackBuffer(new byte[bufferSize]);
+
+          textureView.setAspectRatio(s.height, s.width);
+
+          camera.startPreview();
+        }
+
+        @Override
+        public void onSurfaceTextureSizeChanged(
+            final SurfaceTexture texture, final int width, final int height) {}
+
+        @Override
+        public boolean onSurfaceTextureDestroyed(final SurfaceTexture texture) {
+          return true;
+        }
+
+        @Override
+        public void onSurfaceTextureUpdated(final SurfaceTexture texture) {}
+      };
+
+  /**
+   * An {@link AutoFitTextureView} for camera preview.
+   */
+  private AutoFitTextureView textureView;
+
+  /**
+   * An additional thread for running tasks that shouldn't block the UI.
+   */
+  private HandlerThread backgroundThread;
+
+  @Override
+  public View onCreateView(
+      final LayoutInflater inflater, final ViewGroup container, final Bundle savedInstanceState) {
+    return inflater.inflate(layout, container, false);
+  }
+
+  @Override
+  public void onViewCreated(final View view, final Bundle savedInstanceState) {
+    textureView = (AutoFitTextureView) view.findViewById(R.id.texture);
+  }
+
+  @Override
+  public void onActivityCreated(final Bundle savedInstanceState) {
+    super.onActivityCreated(savedInstanceState);
+  }
+
+  @Override
+  public void onResume() {
+    super.onResume();
+    startBackgroundThread();
+    // When the screen is turned off and turned back on, the SurfaceTexture is already
+    // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open
+    // a camera and start preview from here (otherwise, we wait until the surface is ready in
+    // the SurfaceTextureListener).
+    if (textureView.isAvailable()) {
+      camera.startPreview();
+    } else {
+      textureView.setSurfaceTextureListener(surfaceTextureListener);
+    }
+  }
+
+  @Override
+  public void onPause() {
+    stopCamera();
+    stopBackgroundThread();
+    super.onPause();
+  }
+
+  /**
+   * Starts a background thread and its {@link Handler}.
+   */
+  private void startBackgroundThread() {
+    backgroundThread = new HandlerThread("CameraBackground");
+    backgroundThread.start();
+  }
+
+  /**
+   * Stops the background thread and its {@link Handler}.
+   */
+  private void stopBackgroundThread() {
+    backgroundThread.quitSafely();
+    try {
+      backgroundThread.join();
+      backgroundThread = null;
+    } catch (final InterruptedException e) {
+      LOGGER.e(e, "Exception!");
+    }
+  }
+
+  protected void stopCamera() {
+    if (camera != null) {
+      camera.stopPreview();
+      camera.setPreviewCallback(null);
+      camera.release();
+      camera = null;
+    }
+  }
+
+  private int getCameraId() {
+    CameraInfo ci = new CameraInfo();
+    for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
+      Camera.getCameraInfo(i, ci);
+      if (ci.facing == CameraInfo.CAMERA_FACING_BACK)
+        return i;
+    }
+    return -1; // No camera found
+  }
+}
diff --git a/tensorflow/examples/android/src/org/tensorflow/demo/StylizeActivity.java b/tensorflow/examples/android/src/org/tensorflow/demo/StylizeActivity.java
index 7afe2bf541..58dd5c6069 100644
--- a/tensorflow/examples/android/src/org/tensorflow/demo/StylizeActivity.java
+++ b/tensorflow/examples/android/src/org/tensorflow/demo/StylizeActivity.java
@@ -28,6 +28,7 @@ import android.graphics.Paint;
 import android.graphics.Paint.Style;
 import android.graphics.Rect;
 import android.graphics.Typeface;
+import android.hardware.Camera;
 import android.media.Image;
 import android.media.Image.Plane;
 import android.media.ImageReader;
@@ -58,6 +59,8 @@ import org.tensorflow.demo.OverlayView.DrawCallback;
 import org.tensorflow.demo.env.BorderedText;
 import org.tensorflow.demo.env.ImageUtils;
 import org.tensorflow.demo.env.Logger;
+
+// Explicit import needed for internal Google builds.
 import org.tensorflow.demo.R;

 /**
@@ -97,10 +100,6 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
   private int previewWidth = 0;
   private int previewHeight = 0;
-  private byte[][] yuvBytes;
-  private int[] rgbBytes = null;
-  private Bitmap rgbFrameBitmap = null;
-  private Bitmap croppedBitmap = null;

   private final float[] styleVals = new float[NUM_STYLES];
   private int[] intValues;
@@ -108,18 +107,13 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
   private int frameNum = 0;

-  private Bitmap cropCopyBitmap;
   private Bitmap textureCopyBitmap;

-  private boolean computing = false;
-
   private Matrix frameToCropTransform;
   private Matrix cropToFrameTransform;

   private BorderedText borderedText;

-  private long lastProcessingTimeMs;
-
   private TensorFlowInferenceInterface inferenceInterface;

   private int lastOtherStyle = 1;
@@ -363,9 +357,8 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
   @Override
   public void onPreviewSizeChosen(final Size size, final int rotation) {
-    final float textSizePx =
-        TypedValue.applyDimension(
-            TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
+    final float textSizePx = TypedValue.applyDimension(
+        TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
     borderedText = new BorderedText(textSizePx);
     borderedText.setTypeface(Typeface.MONOSPACE);
@@ -393,7 +386,6 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
     grid = (GridView) findViewById(R.id.grid_layout);
     grid.setAdapter(adapter);
     grid.setOnTouchListener(gridTouchAdapter);
-
     setStyle(adapter.items[0], 1.0f);
   }

@@ -455,78 +447,42 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
     }
   }

-  @Override
-  public void onImageAvailable(final ImageReader reader) {
-    Image image = null;
-
-    try {
-      image = reader.acquireLatestImage();
-
-      if (image == null) {
-        return;
-      }
-
-      if (computing) {
-        image.close();
-        return;
-      }
-
-      if (desiredSize != initializedSize) {
-        LOGGER.i(
-            "Initializing at size preview size %dx%d, stylize size %d",
-            previewWidth, previewHeight, desiredSize);
-        rgbBytes = new int[previewWidth * previewHeight];
-        rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
-        croppedBitmap = Bitmap.createBitmap(desiredSize, desiredSize, Config.ARGB_8888);
-
-        frameToCropTransform =
-            ImageUtils.getTransformationMatrix(
-                previewWidth, previewHeight,
-                desiredSize, desiredSize,
-                sensorOrientation, true);
-
-        cropToFrameTransform = new Matrix();
-        frameToCropTransform.invert(cropToFrameTransform);
-
-        yuvBytes = new byte[3][];
-
-        intValues = new int[desiredSize * desiredSize];
-        floatValues = new float[desiredSize * desiredSize * 3];
-        initializedSize = desiredSize;
-      }
-
-      computing = true;
-
-      Trace.beginSection("imageAvailable");
-
-      final Plane[] planes = image.getPlanes();
-      fillBytes(planes, yuvBytes);
+  private void resetPreviewBuffers() {
+    croppedBitmap = Bitmap.createBitmap(desiredSize, desiredSize, Config.ARGB_8888);

-      final int yRowStride = planes[0].getRowStride();
-      final int uvRowStride = planes[1].getRowStride();
-      final int uvPixelStride = planes[1].getPixelStride();
+    frameToCropTransform = ImageUtils.getTransformationMatrix(
+        previewWidth, previewHeight,
+        desiredSize, desiredSize,
+        sensorOrientation, true);

-      ImageUtils.convertYUV420ToARGB8888(
-          yuvBytes[0],
-          yuvBytes[1],
-          yuvBytes[2],
-          previewWidth,
-          previewHeight,
-          yRowStride,
-          uvRowStride,
-          uvPixelStride,
-          rgbBytes);
+    cropToFrameTransform = new Matrix();
+    frameToCropTransform.invert(cropToFrameTransform);
+    yuvBytes = new byte[3][];
+    intValues = new int[desiredSize * desiredSize];
+    floatValues = new float[desiredSize * desiredSize * 3];
+    initializedSize = desiredSize;
+  }

-      image.close();
-    } catch (final Exception e) {
-      if (image != null) {
-        image.close();
-      }
-      LOGGER.e(e, "Exception!");
-      Trace.endSection();
-      return;
+  protected void processImageRGBbytes(int[] rgbBytes ) {
+    if (desiredSize != initializedSize) {
+      LOGGER.i(
+          "Initializing at size preview size %dx%d, stylize size %d",
+          previewWidth, previewHeight, desiredSize);
+
+      rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
+      croppedBitmap = Bitmap.createBitmap(desiredSize, desiredSize, Config.ARGB_8888);
+      frameToCropTransform = ImageUtils.getTransformationMatrix(
+          previewWidth, previewHeight,
+          desiredSize, desiredSize,
+          sensorOrientation, true);
+
+      cropToFrameTransform = new Matrix();
+      frameToCropTransform.invert(cropToFrameTransform);
+      yuvBytes = new byte[3][];
+      intValues = new int[desiredSize * desiredSize];
+      floatValues = new float[desiredSize * desiredSize * 3];
+      initializedSize = desiredSize;
     }
-
     rgbFrameBitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight);
     final Canvas canvas = new Canvas(croppedBitmap);
     canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);
@@ -536,24 +492,24 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
       ImageUtils.saveBitmap(croppedBitmap);
     }

-    runInBackground(
-        new Runnable() {
-          @Override
-          public void run() {
-            cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
-
-            final long startTime = SystemClock.uptimeMillis();
-            stylizeImage(croppedBitmap);
-            lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
-
-            textureCopyBitmap = Bitmap.createBitmap(croppedBitmap);
-
-            requestRender();
-            computing = false;
-          }
-        });
-
-    Trace.endSection();
+    runInBackground(new Runnable() {
+      @Override
+      public void run() {
+        cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
+        final long startTime = SystemClock.uptimeMillis();
+        stylizeImage(croppedBitmap);
+        lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
+        textureCopyBitmap = Bitmap.createBitmap(croppedBitmap);
+        requestRender();
+        computing = false;
+        if (postInferenceCallback != null) {
+          postInferenceCallback.run();
+        }
+      }
+    });
+    if (desiredSize != initializedSize) {
+      resetPreviewBuffers();
+    }
   }

   private void stylizeImage(final Bitmap bitmap) {
@@ -584,6 +540,7 @@ public class StylizeActivity extends CameraActivity implements OnImageAvailableL
     }

     // Copy the input data into TensorFlow.
+    LOGGER.i("Width: %s , Height: %s", bitmap.getWidth(), bitmap.getHeight());
     inferenceInterface.feed(
         INPUT_NODE, floatValues, 1, bitmap.getWidth(), bitmap.getHeight(), 3);
     inferenceInterface.feed(STYLE_NODE, styleVals, NUM_STYLES);
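Several hunks in this commit reorder the arguments of `ImageUtils.convertYUV420ToARGB8888` (the output `int[]` now precedes the dimensions, with a trailing flag), but the per-pixel math is unchanged. As a rough, self-contained sketch of one commonly used BT.601 video-range formulation of that conversion — `YuvToRgb` and `yuvToArgb` are hypothetical names, and the demo's native implementation may differ in rounding:

```java
public class YuvToRgb {
  // Clamp an intermediate channel value into the displayable [0, 255] range.
  private static int clamp(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
  }

  // Convert one pixel's Y/U/V samples (as unsigned bytes, 0..255) to packed
  // ARGB_8888, using integer BT.601 video-range coefficients scaled by 256.
  static int yuvToArgb(int y, int u, int v) {
    int c = y - 16;   // video-range luma starts at 16
    int d = u - 128;  // chroma is centered at 128
    int e = v - 128;
    int r = clamp((298 * c + 409 * e + 128) >> 8);
    int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
    int b = clamp((298 * c + 516 * d + 128) >> 8);
    return 0xff000000 | (r << 16) | (g << 8) | b;
  }
}
```

A full-frame converter walks the Y plane at `yRowStride` and samples the half-resolution U/V planes at `uvRowStride`/`uvPixelStride`, which is exactly why those strides appear in the call sites above.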