Some Extended Sharing About Volley
This article does not cover Volley's basic usage or how to issue requests. Its purpose is to walk through Volley's source code and learn from some of its design and optimizations.
- Simple Source Code Analysis
- Practical extension
I. Simple Source Code Analysis
1. First, look at Volley's entry point: the newRequestQueue() method.
public class Volley {
    private static final String DEFAULT_CACHE_DIR = "volley";

    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }
        // The code above sets up the cache directory and the User-Agent string.
        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }
        Network network = new BasicNetwork(stack);
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();
        return queue;
    }

    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }
}
2. The two overloaded newRequestQueue() methods above do the following:
DiskBasedCache configures the on-disk cache directory for request results.
The Network decides how requests are made. By default the choice depends on the system version: HttpURLConnection (via HurlStack) is used on Android 2.3 (API 9) and above, and HttpClient below that. Note that HttpClient was removed in Android 6.0, so using it now requires adding the HttpClient jar.
3. Volley's newRequestQueue() method returns a RequestQueue object and calls its start() method.
Let's look at the RequestQueue class, starting with two important fields:
private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();
Holds Requests whose results may be served from the cache.
private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();
Holds Requests whose results must be fetched from the network.
PriorityBlockingQueue: a thread blocks when there is no element to dequeue; that is, when take() is called and no Request is available, the calling thread blocks.
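The blocking behavior of take() can be seen in a minimal, self-contained sketch (plain Java, not Volley code; all names here are illustrative): a consumer thread parks inside take() until a producer inserts an element.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class TakeBlocksDemo {
    // Demonstrates that take() blocks: the consumer thread parks inside
    // take() until the producer calls put().
    static int demo() throws InterruptedException {
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        final int[] taken = new int[1];
        Thread consumer = new Thread(() -> {
            try {
                taken[0] = queue.take(); // parks here until put() below
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        Thread.sleep(200);   // let the consumer reach take() and block
        queue.put(42);       // unblocks the consumer
        consumer.join();
        return taken[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("took " + demo());
    }
}
```

This is exactly why the dispatcher threads below can sit in an infinite loop without burning CPU: when both queues are empty, they are simply parked in take().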
4. RequestQueue has several constructors. Let's look at what the framework configures by default.
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
As this constructor shows, by default Volley hands results to a Handler bound to the main thread's Looper. That is why you can update the UI directly in a request's result callback. We will look at ExecutorDelivery later.
5. Let's look at RequestQueue's start() method
public void start() {
    stop();
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
The code above starts one CacheDispatcher thread, which reads results from the cache, and by default four NetworkDispatcher threads, which read from the network. You can also specify the number of network threads via the constructor; it is advisable to size it according to the device's capabilities.
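The start() pattern can be sketched in plain Java (not Volley code; DISPATCHER_COUNT and the poison-pill shutdown are illustrative assumptions, standing in for Volley's default pool size of 4 and its mQuit flag): a shared blocking queue drained by a configurable number of dispatcher threads.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DispatcherPoolSketch {
    static final int DISPATCHER_COUNT = 4; // plays the role of Volley's default

    // Push `jobs` work items through the pool and return how many were handled.
    static int drain(int jobs) throws InterruptedException {
        BlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();
        Thread[] dispatchers = new Thread[DISPATCHER_COUNT];
        for (int i = 0; i < dispatchers.length; i++) {
            dispatchers[i] = new Thread(() -> {
                try {
                    while (true) {
                        int job = queue.take();              // blocks when empty
                        if (job == Integer.MAX_VALUE) return; // poison pill: quit
                        processed.incrementAndGet();          // "handle" the job
                    }
                } catch (InterruptedException ignored) {
                }
            });
            dispatchers[i].start();
        }
        for (int j = 0; j < jobs; j++) queue.put(j);
        // MAX_VALUE sorts last in the priority queue, so all real jobs drain first.
        for (int i = 0; i < DISPATCHER_COUNT; i++) queue.put(Integer.MAX_VALUE);
        for (Thread t : dispatchers) t.join();
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drain(20));
    }
}
```

Volley shuts its dispatchers down differently (interrupting them and checking mQuit), but the sharing of one queue among N threads is the same idea.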
6. So by default the framework starts these two kinds of dispatcher threads for us. Let's see what their run() methods do, starting with CacheDispatcher's run():
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    mCache.initialize();
    while (true) {
        try {
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                mNetworkQueue.put(request);
                continue;
            }
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");
            if (!entry.refreshNeeded()) {
                mDelivery.postResponse(request, response);
            } else {
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);
                response.intermediate = true;
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
Roughly, the code above works like this: if there is no Request to execute, the thread blocks inside mCacheQueue.take(). Once a Request is taken and is not cancelled, the dispatcher checks the local cache: if a valid, unexpired entry exists, the result is read from the cache file and posted to the main thread via mDelivery.postResponse(); otherwise the Request is put on the network queue to be executed over the network.
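The four cache outcomes the run() method distinguishes can be condensed into a small sketch (plain Java, not Volley code; the Entry class and its ttl/softTtl fields are illustrative stand-ins for Volley's Cache.Entry):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheDecisionSketch {
    static class Entry {
        long ttl;      // hard expiry: after this, the entry cannot be served
        long softTtl;  // soft expiry: after this, serve it but also refresh

        Entry(long ttl, long softTtl) {
            this.ttl = ttl;
            this.softTtl = softTtl;
        }
        boolean isExpired(long now)     { return ttl < now; }
        boolean refreshNeeded(long now) { return softTtl < now; }
    }

    static final Map<String, Entry> cache = new HashMap<>();

    // Returns which path a request for this key would take, mirroring the
    // four branches of CacheDispatcher.run().
    static String classify(String key, long now) {
        Entry entry = cache.get(key);
        if (entry == null)             return "cache-miss -> network queue";
        if (entry.isExpired(now))      return "cache-hit-expired -> network queue";
        if (!entry.refreshNeeded(now)) return "cache-hit -> deliver";
        return "cache-hit-refresh-needed -> deliver, then network queue";
    }

    public static void main(String[] args) {
        long now = 1000;
        cache.put("fresh",  new Entry(2000, 2000));
        cache.put("stale",  new Entry(500, 500));
        cache.put("softly", new Entry(2000, 500));
        System.out.println(classify("missing", now));
        System.out.println(classify("fresh", now));
        System.out.println(classify("stale", now));
        System.out.println(classify("softly", now));
    }
}
```

The "softly" case is the interesting one: the response is delivered immediately as an intermediate result, and the request is re-queued for a network refresh, which is what response.intermediate = true signals above.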
Now let's look at what the run() method of the network request thread, NetworkDispatcher, does.
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            request = mQueue.take();
        } catch (InterruptedException e) {
            if (mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("network-queue-take");
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }
            addTrafficStatsTag(request);
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
Roughly, this opens an infinite loop that takes Requests from the network queue; when there is nothing to take, the thread blocks. If the taken Request has not been cancelled, mNetwork.performRequest() executes it against the network. Once the result is back, it is written to the cache if the Request is configured to be cached, and then posted to the main thread via mDelivery.postResponse().
7. Next, let's see how results are delivered back to the main thread. Recall that when the RequestQueue is constructed, an ExecutorDelivery is created with a Handler bound to the main thread. Let's look at the ExecutorDelivery class.
public class ExecutorDelivery implements ResponseDelivery {
    private final Executor mResponsePoster;

    public ExecutorDelivery(final Handler handler) {
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

    public ExecutorDelivery(Executor executor) {
        mResponsePoster = executor;
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

    @Override
    public void postError(Request<?> request, VolleyError error) {
        request.addMarker("post-error");
        Response<?> response = Response.error(error);
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
    }
}
mResponsePoster is an anonymous Executor whose execute() method calls handler.post(command), posting the task to the main thread for execution.
postResponse() wraps the delivery work in a ResponseDeliveryRunnable task. Inside ResponseDeliveryRunnable's run() method:
if (mResponse.isSuccess()) {
    mRequest.deliverResponse(mResponse.result);
} else {
    mRequest.deliverError(mResponse.error);
}
The result is handed back to the Request: deliverResponse() is called on a successful response, and deliverError() on an error.
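The trick ExecutorDelivery relies on can be shown without Android at all. Below is a minimal sketch (plain Java, not Volley or Android code; the "fake-main" thread and its queue are illustrative stand-ins for the main Looper): an Executor whose execute() hands every Runnable to one dedicated thread, the way handler.post() hands work to the Android main thread.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;

public class MainThreadExecutorSketch {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    MainThreadExecutorSketch() {
        // A tiny "looper": one thread draining a task queue forever.
        Thread mainThread = new Thread(() -> {
            try {
                while (true) tasks.take().run();
            } catch (InterruptedException ignored) {
            }
        }, "fake-main");
        mainThread.setDaemon(true);
        mainThread.start();
    }

    Executor executor() {
        return tasks::add; // like handler.post(command)
    }

    // Post a result from a worker thread; report which thread ran the callback.
    static String deliveryThreadName() throws InterruptedException {
        MainThreadExecutorSketch sketch = new MainThreadExecutorSketch();
        BlockingQueue<String> result = new LinkedBlockingQueue<>();
        new Thread(() ->
                sketch.executor().execute(() ->
                        result.add(Thread.currentThread().getName())),
                "network-dispatcher").start();
        return result.take(); // "fake-main", not "network-dispatcher"
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("callback ran on: " + deliveryThreadName());
    }
}
```

Even though the worker thread calls execute(), the callback always runs on the single "main" thread, which is exactly why Volley callbacks can safely touch the UI.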
8. Now let's talk about the Request class, which is abstract. Look at two of its abstract methods: parseNetworkResponse() and deliverResponse().
parseNetworkResponse() parses the raw result, and deliverResponse() hands the parsed result to the caller. Subclasses such as JsonRequest and StringRequest implement them differently, and in general we also subclass Request to parse results into whatever form we need. As an example, here is the implementation from one of the official subclasses:
@Override
protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
    try {
        String jsonString = new String(response.data,
                HttpHeaderParser.parseCharset(response.headers, PROTOCOL_CHARSET));
        return Response.success(new JSONObject(jsonString),
                HttpHeaderParser.parseCacheHeaders(response));
    } catch (UnsupportedEncodingException e) {
        return Response.error(new ParseError(e));
    } catch (JSONException je) {
        return Response.error(new ParseError(je));
    }
}
It takes the returned bytes (response.data), converts them to a String using the charset from the response headers, and returns the result wrapped as a JSONObject.
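The charset step deserves a closer look. Here is a minimal sketch of the idea behind HttpHeaderParser.parseCharset (plain Java, not Volley's actual implementation; the parsing logic and method names are illustrative): pull the charset parameter out of the Content-Type header, fall back to a default, then decode the raw bytes with it.

```java
import java.io.UnsupportedEncodingException;
import java.util.HashMap;
import java.util.Map;

public class CharsetSketch {
    // Extract the charset parameter from a Content-Type header value such as
    // "application/json; charset=UTF-8", falling back to defaultCharset.
    static String parseCharset(Map<String, String> headers, String defaultCharset) {
        String contentType = headers.get("Content-Type");
        if (contentType != null) {
            for (String param : contentType.split(";")) {
                String[] pair = param.trim().split("=", 2);
                if (pair.length == 2 && pair[0].equalsIgnoreCase("charset")) {
                    return pair[1];
                }
            }
        }
        return defaultCharset;
    }

    // Decode the response body with the negotiated charset.
    static String decode(byte[] data, Map<String, String> headers)
            throws UnsupportedEncodingException {
        return new String(data, parseCharset(headers, "UTF-8"));
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/json; charset=UTF-8");
        System.out.println(decode("{\"ok\":true}".getBytes(), headers));
    }
}
```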
You can then receive the data through the callbacks, for example:
public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}
9. Next, let's see how a Request is added to the queue, in RequestQueue's add() method:
public <T> Request<T> add(Request<T> request) {
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
This first puts the Request into the mCurrentRequests set, which makes operations like cancellation easy. It then checks whether the request should be cached: if not, it goes straight onto the network queue and the method returns; otherwise it goes onto the cache queue. The trickier part is mWaitingRequests: if a request with the same cache key (i.e. the same URL) is already in flight, the new request is staged in this waiting map instead of being queued again; once the in-flight request for that key finishes, all the staged duplicates are added to the cache queue at once, so they can be served from the freshly written cache.
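The mWaitingRequests idea can be isolated in a small sketch (plain Java, not Volley code; requests are reduced to cache-key strings and the class and method names are illustrative): the first cacheable request for a key is queued, later requests for the same key are staged while it is in flight, and finishing the first one releases the staged duplicates.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class InFlightDedupSketch {
    // null value = key is in flight with no duplicates staged yet,
    // mirroring how Volley uses mWaitingRequests.
    final Map<String, Queue<String>> waiting = new HashMap<>();
    final Deque<String> cacheQueue = new ArrayDeque<>();

    // Returns true if the request was queued, false if it was staged.
    boolean add(String cacheKey) {
        if (waiting.containsKey(cacheKey)) {
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) staged = new LinkedList<>();
            staged.add(cacheKey);              // duplicate: put it on hold
            waiting.put(cacheKey, staged);
            return false;
        }
        waiting.put(cacheKey, null);           // mark the key as in flight
        cacheQueue.add(cacheKey);
        return true;
    }

    // Called when the in-flight request finishes; releases staged duplicates
    // onto the cache queue and returns how many were released.
    int finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged == null) return 0;
        cacheQueue.addAll(staged);
        return staged.size();
    }

    public static void main(String[] args) {
        InFlightDedupSketch q = new InFlightDedupSketch();
        q.add("http://example.com/a");         // queued
        q.add("http://example.com/a");         // staged
        q.add("http://example.com/a");         // staged
        System.out.println("released: " + q.finish("http://example.com/a"));
    }
}
```

The payoff of this design: N identical requests fired in quick succession cost only one network round trip, with the other N-1 served from the cache the first one populates.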
Summary: as the source shows, Volley is easy to extend, programs to interfaces, and is well decoupled. It also maintains the request queue and dispatching machinery for us, saving a lot of boilerplate.
Its weakness is large downloads and uploads: because every response body is read into memory as a byte array, handling large files is too costly. Of course, we can extend it for those cases as needed.