Snap is an image search application, created as an interview project. It uses the Pexels API to retrieve search results, then downloads the resulting images and displays them in a grid in the app.
I like to follow processes that help me in app development. While developing this app, I tried to mirror the phases of an agile process model. Before starting any of these phases, though, I did a lot of research in the Android user guides and documentation. Then I started with the specification phase. Here, I created a Software Requirements Specification (SRS) document, which served as a checklist of everything I needed to do. Then, in the design phase, I designed a user interface while considering other professional applications I use daily. The search bar was inspired by the Google Play app, and the image grid was partially inspired by Instagram. I then followed this by creating a UML class diagram and an activity transition flowchart. Whenever I start a project, I try to create UML diagrams, as they help me think about the application both in its little details and as a whole. I then proceeded with the implementation. Unfortunately, I did not go through test-driven development, as I do not yet know how to write tests in Android and I did not have unlimited time to learn everything I could while doing this project. Most of my testing was done through logs and simply using the app. I definitely plan to try test-driven development in my next personal project, where I will have essentially unlimited time to experiment with it.
The Android guides recommend a software architecture that separates the UI components from the data components. Looking back on the Android apps I have experimented with in the past, I can now see that my old code was garbage. Some of the data-driven components were controlling the UI directly, so modifying anything would have required changes across many different components. I also found that I had placed a lot of work on the main thread.
But getting back to the architecture, the recommended way to design the components is to have UI components communicate with a ViewModel, which communicates with a Repository, which in turn communicates with web services or persistent storage. If you are not familiar with these, the ViewModel holds instances of the data that the UI needs and calls on the Repository to fetch that data. Within the ViewModel, it is recommended to use LiveData components, as they use the observer pattern while being lifecycle-aware. Here is a diagram of the recommended architecture:
And here is what my application’s architecture looks like:
Also, here are links to the ViewModel overview (https://developer.android.com/topic/libraries/architecture/viewmodel) and the LiveData overview (https://developer.android.com/topic/libraries/architecture/livedata) if you would like to learn more about them.
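To make the flow concrete, here is a minimal, hypothetical sketch of the UI → ViewModel → Repository chain in plain Java. The class and method names are my own inventions for illustration, and the `ObservableValue` holder only mimics the observer-pattern half of LiveData; real LiveData is also lifecycle-aware, which this sketch is not.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-in for LiveData: observers are notified on every update,
// but there is no lifecycle awareness here.
class ObservableValue<T> {
    private T value;
    private final List<Consumer<T>> observers = new ArrayList<>();

    void observe(Consumer<T> observer) { observers.add(observer); }

    void setValue(T newValue) {
        value = newValue;
        for (Consumer<T> o : observers) o.accept(newValue); // notify, like LiveData
    }

    T getValue() { return value; }
}

// Stand-in for the Repository layer that would call a web service.
class SearchRepository {
    List<String> fetchImages(String query) {
        return List.of(query + "-thumb-1", query + "-thumb-2");
    }
}

// The ViewModel holds the data the UI observes and asks the Repository for it.
class SearchViewModel {
    private final SearchRepository repository = new SearchRepository();
    final ObservableValue<List<String>> results = new ObservableValue<>();

    void search(String query) {
        results.setValue(repository.fetchImages(query)); // observers fire here
    }
}

public class ArchitectureSketch {
    public static void main(String[] args) {
        SearchViewModel viewModel = new SearchViewModel();
        // The UI layer only observes; it never talks to the Repository directly.
        viewModel.results.observe(images -> System.out.println("UI shows: " + images));
        viewModel.search("cats");
    }
}
```

The key property is the one-way dependency: the UI knows the ViewModel, the ViewModel knows the Repository, and updates flow back out through observation rather than direct calls.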
As for the web services, I used Retrofit to fetch the data and RxJava to move data fetching onto a background thread.
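The app does this with RxJava's schedulers, but the underlying idea can be shown with a stdlib-only analogue: submit the network call to a background executor so the main/UI thread never blocks on it. This is a sketch of the concept, not the app's actual Retrofit/RxJava code, and the names are assumed.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundFetchSketch {
    // Stand-in for a network call that must not run on the main/UI thread.
    static String fetchSearchResult(String query) {
        return "results for " + query;
    }

    public static void main(String[] args) throws Exception {
        // Plays a role similar to RxJava's Schedulers.io() in this sketch.
        ExecutorService io = Executors.newSingleThreadExecutor();
        Future<String> result = io.submit(() -> fetchSearchResult("cats"));
        // In the real app, the result is delivered back to the UI thread instead.
        System.out.println(result.get());
        io.shutdown();
    }
}
```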
Features I focused on
The features I focused on and how they work:
- Varying screen orientation compatibility
- I designed the UI so that its components adapt to the device's orientation
- The images in the grid are downloaded at the dimensions they will be displayed at: 3 images per row in portrait mode and 6 images per row in landscape mode
- I used “dp” units to size the UI components
- Varying screen size and density compatibility
- The application determines what dimensions to download images in based on the screen size in pixels
- Image searching
- Search results are fetched, then all thumbnails are fetched in a 1:1 aspect ratio and displayed in a grid
- Image viewing
- Upon choosing an image item, a new screen is displayed which initiates a download of the image in its original aspect ratio and at the size it is displayed at
- Progress indication
- Indeterminate progress bars are made visible or invisible depending on a LiveData component that indicates whether a search is in progress
- Error indication
- A LiveData component also indicates whether an error has occurred, in which case the UI is updated (here, a Toast message is displayed)
- Image information displaying
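The sizing logic described in the first two bullets (columns per orientation, thumbnails downloaded at display size) boils down to simple arithmetic. A hypothetical sketch; the method names are mine, not the app's:

```java
public class ThumbnailSizing {
    // Columns per row as described above: 3 in portrait, 6 in landscape.
    static int columnsFor(boolean isPortrait) {
        return isPortrait ? 3 : 6;
    }

    // Thumbnails are square (1:1), sized to the width each grid cell occupies.
    static int thumbnailSizePx(int screenWidthPx, boolean isPortrait) {
        return screenWidthPx / columnsFor(isPortrait);
    }

    public static void main(String[] args) {
        // e.g. a 1080 px wide screen in portrait: 1080 / 3 = 360 px per thumbnail
        System.out.println(thumbnailSizePx(1080, true));   // 360
        System.out.println(thumbnailSizePx(1920, false));  // 320
    }
}
```

Because the download size is derived from the actual screen width in pixels, the same logic covers varying screen sizes and densities without special cases.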
Features I planned for but did not implement
- Pinch to zoom in images
- Drag around images while zoomed in
- Button fade in/out on touch in the view image screen
Possible additional features that can be added
I’ve thought up some additional features and somewhat designed the application so that they can be added later on. I say somewhat because I’m not completely sure how much work they would actually take to implement.
The features and how they can be added are:
- Search and view videos
- Each media object is represented by a superclass called “Pexel Element”, and I implemented the subclass “Pexel Image” to represent images. A “Pexel Video” subclass can be implemented to represent videos.
- An option to choose videos in the UI can be added.
- The search results for pictures and videos have many elements in common, so fetching video search results should be possible without creating a whole new class and logic for it.
- The display of thumbnail grids can be used for videos as well, so that activity does not have to be changed much.
- When a “Pexel Video” is then selected in the grid of video selection, the video data can then be downloaded within the screen that shows the video. New components must be added in this case as the image object is not the same as the video object.
- Search by voice
- I believe this can be added without too much difficulty, as I used a SearchView object to represent the search bar, and voice search is a built-in feature of SearchView.
- Filtering search results
- This can be implemented by removing unwanted items from the results list, though I believe this might not be practical for this specific application. Since I am using a data source I have no control over, I cannot control what is downloaded into the application. I may be wrong, but I think filtering might be better done on the server side. It is definitely possible to do on the application side, though.
- Configurable settings
- I have thought of two settings that could be added: controlling how many results to fetch per page, and toggling search-as-you-type on or off. Beyond creating the UI for them, these could probably be implemented with only a few changes.
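The media hierarchy described in the video bullet above could look roughly like this. It is a simplified sketch: the field names and `gridLabel` helper are assumptions for illustration, not the app's actual classes.

```java
// Simplified sketch of the "Pexel Element" hierarchy; fields are assumed.
abstract class PexelElement {
    final int id;
    final String thumbnailUrl;

    PexelElement(int id, String thumbnailUrl) {
        this.id = id;
        this.thumbnailUrl = thumbnailUrl;
    }

    // Both images and videos can appear in the same thumbnail grid.
    String gridLabel() { return getClass().getSimpleName() + " #" + id; }
}

class PexelImage extends PexelElement {
    final String fullImageUrl;

    PexelImage(int id, String thumbnailUrl, String fullImageUrl) {
        super(id, thumbnailUrl);
        this.fullImageUrl = fullImageUrl;
    }
}

// A future video type slots into the same grid without changing existing logic.
class PexelVideo extends PexelElement {
    final String videoUrl;

    PexelVideo(int id, String thumbnailUrl, String videoUrl) {
        super(id, thumbnailUrl);
        this.videoUrl = videoUrl;
    }
}

public class HierarchySketch {
    public static void main(String[] args) {
        PexelElement[] grid = {
            new PexelImage(1, "thumb1.jpg", "full1.jpg"),
            new PexelVideo(2, "thumb2.jpg", "clip2.mp4"),
        };
        for (PexelElement e : grid) System.out.println(e.gridLabel());
    }
}
```

Since the grid only depends on the superclass, adding videos mostly means adding the subclass and a new viewing screen, which matches the bullets above.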
Possible better implementations
There are some components I wonder whether I should have implemented differently. One example is the downloading of the images. Currently, all the Retrofit calls for thumbnails are merged into one observable, which downloads the thumbnails one after another on a single background thread (not the main thread). This is how I initially designed the application. Thinking back on it, though, it might have been better to give each thumbnail its own observable and download each one on its own thread. This would probably improve the application's performance significantly: right now, no thumbnail displays until all thumbnails are downloaded, whereas with separate downloads I could show a placeholder for each image until it arrives, with thumbnails appearing at different times. This is just an idea I have not tried, though, so I do not know how it would actually perform in the real world.
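The per-thumbnail alternative can be approximated with plain Java executors: one task per thumbnail, each grid cell starting as a placeholder and filling in as soon as its own download completes. This is a hypothetical sketch of the idea, not the app's RxJava code, and `download` is a stand-in for the real network call.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class ParallelThumbnails {
    // Stand-in for a single thumbnail download.
    static String download(String url) {
        return "bitmap(" + url + ")";
    }

    public static void main(String[] args) throws Exception {
        List<String> urls = List.of("a.jpg", "b.jpg", "c.jpg");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // One independent task per thumbnail, instead of one merged sequence.
        List<Future<String>> cells = urls.stream()
                .map(url -> pool.submit(() -> download(url)))
                .collect(Collectors.toList());

        for (Future<String> cell : cells) {
            // In a UI this is where a placeholder would be swapped for the bitmap.
            System.out.println(cell.get());
        }
        pool.shutdown();
    }
}
```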
Possible bugs or just annoyances
There are some parts of the application that may be considered as bugs, but are mostly just annoyances to the user. These include:
- A search is initiated every time the screen orientation changes, which means the user's scroll position is lost; the user must start from the beginning
- Pressing the back button while the keyboard is open during a search does not remove focus from the search bar, so every time the user returns to this screen, the keyboard pops up again. I was able to remove focus on search submit but have not yet found a solution for the back button.
Choosing not to fix or implement
Why have I not implemented these additional features or fixed these annoyances? Time. This is an interview project, and although it does not have a hard deadline, the longer it takes to submit, the lower my chances will be.
Things don’t always go according to plan
That’s right. It’s probably due to lack of experience, but sometimes plans must change. In my initial design, the thumbnail and the actual image shown when viewing an image were separate downloads. They still are, and I believe they should be, so that thumbnail downloads don’t take too long; now that I have finished the application, I still think downloading smaller images is better for performance. However, I initially had the not-so-great idea of downloading everything at once: both the thumbnails and the actual images, before showing the thumbnails to the user. I found that this took an incredible amount of time, so I had to change the plan. Instead, I now download all the thumbnails when showing them to the user, and only download the actual image when it is selected and viewed. This was also much better in terms of memory: only one actual image is downloaded rather than all of them.
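The final approach amounts to lazy loading: the expensive full-resolution download only happens on first selection. A hypothetical sketch of the pattern (assumed names, with a stand-in for the real download):

```java
public class LazyFullImage {
    // Stand-in for the expensive full-resolution network download.
    static String downloadFullImage(String url) {
        return "full(" + url + ")";
    }

    private final String fullImageUrl;
    private String cachedFullImage; // null until the user opens the image

    LazyFullImage(String fullImageUrl) {
        this.fullImageUrl = fullImageUrl;
    }

    // Downloads only on first selection; unselected images cost no memory.
    String open() {
        if (cachedFullImage == null) {
            cachedFullImage = downloadFullImage(fullImageUrl);
        }
        return cachedFullImage;
    }

    public static void main(String[] args) {
        LazyFullImage image = new LazyFullImage("photo.jpg");
        System.out.println(image.open()); // downloaded now, not at search time
    }
}
```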
Another focus: Readability
Readability is probably one of the most important aspects of software development. Although this is not an application that will be maintained or read by many people, I still believe its readability is extremely important. How did I improve readability?
- Explanatory class, function, and variable names, even if they are long
- Making functions as small as can be
- Making individual classes as small as can be
- Placing everything that can be placed in a function into a function
These bullets are the ideas that stood out to me the most while reading Clean Code by Uncle Bob.
I actually did not have much of a hard time creating this app, most of the time. Learning and using the recyclerview-selection library, however, was another matter. It's not that it was really difficult to understand; I just needed some examples to see how to use it, and there were not many. There were also some terms in the documentation and guides that I did not really understand, and I could not find what they refer to. It took a lot of trial and error to get it working, but in the end, I was able to.
Things I learned
I learned an incredible amount creating this application, with application architecture probably being the most important lesson. When I started, I did not know how to download images. I did not know how to use RxJava or Retrofit, and to be honest, I did not even know Retrofit existed. I learned about the observer pattern, and I am now planning to learn other patterns I will likely use in my software engineering journey. This experience has helped me a lot, as I have been planning to create and release an Android application. I hope that in time, I will have become a much better Android developer and have created many more projects.
Thank you for reading this post!