Post 10 – Reflection and proposition

Before the discussion on my proposal that I had with my peer Liz, I was already a little unsure of what I was creating, or what my actual outcome was to be. Through further research before the session, I had discovered, or rather specified, exactly what my specific issue dealing with 18-25 year olds was, and what I wanted the outcome to do.

It was clear through research that the monitoring and data collection wasn’t going to end or let up any time soon, especially not with the inclusion of the Internet of Things. And so rather than designing a possibility to end the monitoring on either end, it was decided that my proposal would aim at creating awareness of the increased privacy issues, and get young adults to spread the word about, or understand, the Internet of Things. Thus, my proposal was to create awareness, to educate, or to inform.

I didn’t exactly have an actual proposal idea to run through with my peer in the class session. I had a few ideas floating around that I had picked out from the brainstorming session around the 5 possibilities to create change, however I wasn’t sold on a particular one. And so, in the session with Liz, I decided to run through my 5 ideas–quite briefly–and figure out if a particular one caught her eye.

None really did, or they weren’t at a point to yet.

However she was quite startled and intrigued by a story I told her that I had found in a news article. Basically, the gist of the story was that a young woman had extremely private and intimate personal data collected from a product of hers, when she had no idea she was being monitored. This snippet sparked both our interests, as it really portrayed the idea that public entities such as businesses and companies can collect very private data from us without our knowledge, in very private settings and environments. Who knew that you could be monitored through products in your house or bedroom?

Even though I didn’t have an exact proposal, she did give me some advice and feedback on the ones that I did have, and brainstormed other ideas with me.

The first piece of feedback she gave me was that she liked the idea of creating awareness or informing the generation of the lack of privacy. We both felt that the monitoring wasn’t going to stop, and luckily she agreed with me. And so this now became the focus of what I ultimately wanted my system or design to do.

Due to the short story that I had told her, and the fact that she was quite shocked by its invasive nature, she felt that it could be a good idea to focus on a specific set of data to help ground the proposal or make it more emotional. While being specific about personal details could have worked, she suggested that I look into creating a proposal around the really private data that we have, such as in the story told. This notion also helped to develop my proposal, as there are lots of ways that we give out private data, however most of the time we know we are giving it out. So I thought it could be interesting to focus on the times that we are unaware we are providing private and personal data, such as in the Internet of Things.

Another piece of critique that Liz provided was to place whatever my issue or proposal was into a real world context. Place it in an area, a time, a place, a social setting. That way, whatever my proposal ends up being, it will be relatable to the generation or audience that it is being designed for. Immediately, this made me think of social media and anything online, and also of the bedroom. People always say, or at least imply, that our bedrooms are a visual expression of who we are; our interests, loves, personalities etc. So why not place my proposal in the context of the bedroom and online? There isn’t one person I know who doesn’t use their phone at least once a day in their bedroom, or doesn’t use a single piece of technology or a product daily. If I had to look around my room, I would at least see a computer, a laptop, an iPad, iPod, phone, Nintendo 3DS, Wacom tablet etc. So it’s fair to say that this setting could work effectively for my target audience.

The final piece of information that we discussed was another round of WHY? Why did I want to create something like this? Why would they interact with it? Why was I thinking of a service design over any of the other emergent practices? The gist of our conversation was that I want people to care. Care about their privacy, care about what information they are putting out there, and care about who is viewing it. So along with the basis of informing the audience or making them aware of the Internet of Things, I really wanted to find a way to make them care.

This session was very helpful as I was able to get another brain on my issue. I could work out if things were working and whether I was heading in the right direction, or if I had completely lost the plot. It also taught me (again!) that everybody thinks differently. What I figure could be an excellent idea could be terrible for someone else, or vice versa. I understand exactly now why there is usually user testing and prototyping along the way for all projects.

So now for my revised proposal—

Growing up in the age of technology, 18-25 year olds have witnessed the rise of the Internet and its widespread use. And in today’s society we are being introduced to the Internet of Things, a system where all devices and products will have the ability to connect to the Internet and feed information to their suppliers and companies. However, users this age aren’t aware of the Internet of Things or its increased invasion of privacy. While they don’t necessarily care about their online privacy, they know what personal information should or shouldn’t be posted. The problem becomes the increased data monitoring that we are unaware of in public and private spaces.

Since the internet is so ingrained in our daily lives, ending the data collection and monitoring isn’t a possibility. Instead, the change would be to create awareness and inform this generation of the increased potential for data monitoring with the inclusion of the Internet of Things. The change should get them to think differently about the Internet of Things and what products could be linked and connected, as well as how they interact with their private and personal environments. The change should start a conversation among this generation, for them to continue to spread the word.

Which brings me to my possible design action. The Unseen, or Unseen Connections (the name is pending), is a service design that aims to create change. The proposal is an augmented reality app that shows or reveals the unseen connections that products and devices have to the Internet of Things. The user could be introduced to the app through a social media hashtag that sets up the campaign and encourages them to see their ‘home’s Internet of Things’. After answering a few questions, and inputting parameters for daily use, the app then accesses the phone’s camera and superimposes graphics and lines over the real life image. The app reveals what devices are or could be connected, revealing to the user the possibility for data monitoring and collection. After this, the app also provides tips for ensuring your privacy in the Internet of Things, especially in your bedroom, based on the results seen in the camera. From here, the user is then encouraged to continue the conversation, and spread a link or the hashtag to their friends and peers on social media. Reveal the connections, be informed or shocked, and spread the word.

Proposal visualisations

Post 9 – Visual documentation of the brainstorming session

Group brainstorm of possibilities of change

The image above depicts the brainstorming session that our group had around my issue. It was decided early on that having individual pages for each issue would invite us to throw any and all ideas on the page, and encourage us to fill the space with possibilities.

Another rule for the group initiated early in the process was that there was to be no judgement with regards to the ideas conveyed. This ensured that it was quantity being created rather than quality (a particularly strange concept to wrap your brain around when the whole course has been about the quality of work and concept).

With these rules in mind, we began to brainstorm each other’s problem statements individually. Spending around 15 minutes on each person, we spoke about the possibilities we were imagining, and then wrote them down. Oftentimes one idea would spark another, and branches of similar ideas would be created.

What I found good and useful about this process of brainstorming was that I managed to get different perspectives on my problem and issue, and gather ideas from an outside point of view. For the past 7 weeks I have mostly been the only one researching and developing my issue, so to have people brainstorm visual responses as if they were possible users was a great and useful experience. The process also allowed undiscovered concepts and visuals to come to light. There were some ideas mentioned that I hadn’t thought about, which managed to spur different thoughts.

However, there were some downsides to this brainstorming process also. The main disadvantage was that the problem statement I had wasn’t well researched and I didn’t have a sufficient understanding of the issue, because it was spurred from a comment by one of my peers. It would have been better to originally choose the Internet of Things, like I had been researching, to get actual concepts and possible responses I could have developed. The other slight issue that I discovered with this process was that my peers didn’t have a great understanding of the issue either. It may have just been that I didn’t explain certain parts of it correctly or well enough, but seeing as data issues generally aren’t talked about, it was hard to brainstorm solutions.

Overall, the process was helpful in providing more eyes to bounce ideas off and see what they would do in my situation, however it would have been more effective if I had chosen a more researched (and possibly broader) topic in order to get ideas to develop.

After the slight disaster of my part of the group brainstorming session, I decided to do further research and try the exercise again. Since the Internet of Things was a focus for the past few weeks, I decided to create another problem statement, but with privacy and the Internet of Things at the centre of the exercise.

New problem statements

With the map above, I felt like I had a better idea of my concept and problem, and could create more possibilities for change. Or at least there were more opportunities to look at. And so, with the top right map being a little tight, I recreated it on a larger page, and kept developing visual responses and ideas.

New possibilities of change

While it was great to redo the class and group exercise of brainstorming the possibilities for change, doing it by myself lacked the group experience and the opportunities created by having multiple eyes on the issue. The next step would be to get another person to briefly look at the ideas presented, and see if they can add some, or change any that are existing.

Post 8 – Brainstorming possibilities for a design response

After weeks of researching, it now came time to start thinking about the end game. What can I turn all of this research into, and what kind of design response could be created?

The individual and collaborative tasks that were undertaken in class were very helpful–if only I had a good grasp on a specific data issue! The first section of the exercise was to individually develop a problem statement. Throughout the research process, I hadn’t investigated a specific issue within data privacy and surveillance. And although the Internet of Things was somewhat specific, at the time of the problem statement it didn’t feel specific enough. So with a brief discussion of issues and topics with a peer, the issue of patient data came to mind as a specific concept that was also present in the research. With this brief topic in mind, I tried to develop a problem statement.

Initial problem statements

However, it almost seemed too specific (topic / user wise), and was probably too long. Shortened, it came down to patients wanting control over their health data. There wasn’t a lot of room for interpretation or response development because the topic was too small in terms of who it involved and the creative solutions available. I almost needed something more varied and broad that could also be specified in certain situations.

But I powered on with the specific patient data, and used the problem in the next stage of the task, which was to brainstorm any and all visual design responses to the problem statement that were of an emergent practice. This was difficult as not only did I not have a lot of understanding of the problem and its key characteristics, but there was nothing to clearly explain the problem to my peers.

Even still, we brainstormed for around ten minutes and came up with a few possibilities. Not nearly enough to develop a good proposal from, though.

Group brainstorm of possibilities of change

After taking a week off the research and development, I wanted to try the exercises again. So after doing some more secondary research, and going back to my original topic of the Internet of Things, I developed new problem statements, and brainstormed new possibilities for visual design responses.

Initial problem statements
Initial problem statements

It was decided that the responses would be around education, warning and limiting the problem rather than stopping it, as the data privacy controversy won’t end any time soon while the Internet of Things is active and growing.

Five ideas stood out as the most possible and interesting, as well as the best responses to the problem.

While the emergent practices were in the foreground of my mind, I feel that some of the responses may need a greater connection to one of them.

  1. A data visualisation on the places that you would get targeted / monitored, or what types of data would be collected if a particular suburb or local area were to be a smart city in the Internet of Things.
  2. A new service / policy for companies, governments and businesses to comply with. Like Microsoft’s DNT.
  3. An opt-in / opt-out system / service that could act as a way to be a part of the data collection and monitoring as little or great as you want (limits).
  4. A data visualisation on how much of ‘YOU’ can be collected through the Internet of Things data collection / monitoring.
  5. A service that aims to spread the awareness of the Internet of Things around the home, especially with regards to public entities monitoring your private data without you knowing about it.

I tried to keep the same mindset of the process taken in the original brainstorm session in class: there is no judgement, the aim is quantity over quality, and it shouldn’t be too hard (in terms of how the concept can progress or be adopted). It would have been good to have another person to bounce ideas off, however the time frame left me short.

From here, it was time to determine a particular response that fitted best into one of the emergent practices, and had the most possibilities for change. It came to my attention that the solution was not going to be to stop the monitoring or end the tracking of private data, as it is already too prevalent in today’s society. What is needed is a way to create limits on the collection and monitoring of data, so that users are given part of the control. Or at least there could be a compromise.

One possibility seemed the most interesting and direct in creating an intervention: number 3, an opt-in / opt-out system / service that could act as a way to be a part of the data collection and monitoring as little or as much as you want (creating limits). Being in the so-called technological age or generation, 18-25 year olds have grown up with technology and the internet. They have seen it born and grow into a gigantic virtual world that is used daily. However, with all of this growth and use, some things have been lost. With the terms and conditions of online websites being so long and in such fine print, they are generally skipped over and forgotten about. Or, on the other hand, the terms and conditions are deliberately placed in hard to find areas on sites.

What is needed here is a system that is in the control of the user. And so, this proposal aims to give control back to the user by creating an opt-in / opt-out service. For every site (or connected product / place in the Internet of Things), users could be presented with a short form, or a button that transforms into a slightly longer form. The concept is that, through a standardised form or set of questions, the user could state how much or how little of particular things they would want to be tracked. This way, instead of just stating ‘track’ or ‘don’t track’, they can be involved in some aspects, none at all, or only for particular companies / products they trust. There is also potential for the system to go further and block particular details of the user, so their online persona turns into a bunch of statistics rather than a digital personality. The tracking and monitoring control would be up to the user.
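To make the idea more concrete, the per-company, per-category preferences described above could be sketched roughly like this. This is purely an illustrative assumption of how such a service might store its answers; the class, method and category names (`TrackingPreferences`, `is_allowed`, the `CATEGORIES` tuple) are all hypothetical, not part of any real product.

```python
# Hypothetical sketch of the opt-in / opt-out preference concept.
# All names and categories here are illustrative assumptions.
from dataclasses import dataclass, field

# Data categories a user could opt in or out of, per company.
CATEGORIES = ("location", "browsing_history", "purchases", "device_sensors")

@dataclass
class TrackingPreferences:
    # Per-company overrides: company -> {category: True/False}
    overrides: dict = field(default_factory=dict)
    # Default answer when a company has no explicit entry (opt-out).
    default_allow: bool = False

    def set_preference(self, company: str, category: str, allow: bool) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.overrides.setdefault(company, {})[category] = allow

    def is_allowed(self, company: str, category: str) -> bool:
        # Fall back to the user's default when nothing was stated.
        return self.overrides.get(company, {}).get(category, self.default_allow)

prefs = TrackingPreferences()
prefs.set_preference("TrustedShop", "purchases", True)
prefs.set_preference("AdNetwork", "location", False)

print(prefs.is_allowed("TrustedShop", "purchases"))  # True: explicitly opted in
print(prefs.is_allowed("AdNetwork", "location"))     # False: explicitly opted out
print(prefs.is_allowed("Unknown Co", "location"))    # False: default is opt-out
```

The key design choice in the sketch is the opt-out default: a company the user has never answered a form for gets nothing, which matches the aim of giving control back to the user rather than to the tracker.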

While this could be a solid idea, it is only a draft at this point and could (or most likely would) change in the near future.

Post 7 – Issue Mapping

Co-creation has always been a slightly terrifying concept. However, it is also sometimes a relief. This post will explore my experience with co-creation in mapping controversies and actor profiles in the data privacy sector.

The first task to work through was yet another mapping exercise around data privacy and its stakeholders–except this time, in pairs. While this was an easy enough task to complete, both of us had slightly different understandings of what we were to do. With our previous individual and group maps by our sides, my partner was just recreating them with the same stakeholders, while I was trying to be more specific: who exactly interacts with data and online privacy, and what specific parties are affected by all its facets? Part of this process was helpful as it provided me with a different perspective on the issue and those involved, but the other part of it was also difficult, as no two people think alike, so instructions got lost in the mix.

Remapping the stakeholders

The next task was to map the controversies surrounding the topic of data and privacy. This task was a better use of the co-creation as it really explored many different facets of the topic. While my research was looking into ownership and the Internet of Things, my partner’s research was delving into personal data, especially with regards to mobile applications. Therefore, many different specific issues were being covered, and the controversies–or polemic–map could be all inclusive. What worked best here was just writing it down on the paper. What do they feel? Why do they feel that way? What would the opposite side of this polemic feel, and why? Confirmation that an idea was relevant to the topic was often sought, however the process just called for as many controversies as possible. This ‘no-judgement’ policy was accepted throughout the tasks.

Polemic map

Following the polemic map, the co-creation took on a more hands-on approach with the mapping of a particular polemic. ‘Ownership’ was the chosen polemic, as it had more possibilities in terms of where it lay in context, and who it affected. This stage of the co-creation workshop proved to be a little difficult. It was excellent to have another person’s ideas and train of thought, however, like earlier, we had slightly different notions of what was to be mapped. A conclusion was made here that even though it was a ‘co-creation’ task, someone needed to take the lead to keep the thoughts flowing, and pens moving. So while I took charge of the task, the ‘no-judgement’ policy was still in effect. However, the process of mapping the stakeholders, emotions and motivations of a specific polemic assisted in the development of a facet of data privacy. In other words, it helped develop an understanding of a specific situation.

‘Ownership’ polemic map

The next stage brought in another couple, building the co-creation group. While this initially seemed like a worse outcome given the slight problems of just being in a couple, it actually proved to be easier. The conclusion early on was that the more hands (or brains), the more possibilities that can be created. And in terms of the task itself, it was enlightening to think of all the actors that play a role, or are affected, in the data privacy sphere. Selecting the polemic of ‘ownership’, the task was to categorise all the actors present in the issue in terms of objects, emotions, representations, identities and other groups. What was interesting with this process was that it was thinking about the same human and non-human stakeholders, but going beyond what they are and looking at what they do. As Rogers, Sánchez-Querubín and Kil explore in ‘Issue Mapping for an Ageing Europe’, controversies should be taken as the starting point, and from there the focus is on the struggle, the action and the movement (p. 16). In other words, going beyond just what the stakeholders are, and looking at how they affect or are affected by particular polemics. It was also interesting to think of this map as a connection between human and non-human actors. As Rogers, Sánchez-Querubín and Kil paraphrase Latour, ‘map not just human-to-human connections or object-to-object ones, but the zigzag from one to the other.’ (p. 17). And that is where the interest lies.

‘Ownership’ actors map


The following and final task further expands on the actors’ map, however it puts us (the researchers) more in the shoes of an actor. The task: to choose an actor, and portray them through certain characteristics. Who do they associate with? What are they responsible for? Whose values do they align with? This exercise certainly put you in the shoes of the actor you chose, mine being the hacktivist group Anonymous. While I had some idea of who they were and what they did, having the platform of co-creation helped develop a good character for Anonymous, and discover things that generally wouldn’t have been common thought, such as their feelings, communications and motivations. Below is the collection of all actors mapped out in our group.

Specific actors maps

The particular section on social mapping in the ‘Issue Mapping for an Ageing Europe’ reading also assisted in understanding this task. It was the paragraph about the two types of actors: the intermediary, which is predictable and doesn’t transform anything, and the mediator, whose outcome is unpredictable and includes transformation, distortion or translation of meaning and elements. Such things as hardware can generally be called an intermediary, but change something about it, or alter its state, and it can become a mediator. This is known as an ‘action to create change’. In terms of data and privacy, as well as ownership, this action could be that further education is needed into the issue. This could be in the form of a poster or flyer, or even an additional screen before application logins that explicitly asks whether you want to be tracked or not. It could be an opt-out form that allows you to not donate data you don’t want to. The action to create change could be as simple as a login screen or a blocking product, or as complicated as a system or service that acts as a data trust to protect the data you ultimately create. The possibilities could be endless.



Rogers, R., Sánchez-Querubín, N. & Kil, A. 2015, Issue Mapping for an Ageing Europe, Amsterdam University Press, Amsterdam.

Following the completion of this class and its mapping exercises, I wanted to go back and try some of these tasks again. Further along in the process, my focus on data and privacy was becoming a little clouded, so I used these tasks to bring myself back into focus. Below are images of those efforts.

Remapping the stakeholders
Remapping the stakeholders
Remapping the stakeholders
Remapping the stakeholders

Post 6 – Scraping the web for data; Twitter

Twitter is an interesting program and medium. It is a global source accessible to anyone with the internet or a mobile phone, and because of this it has redefined the time span for news to spread or break. If you want to break a story, or spread news about a particular topic, Twitter is your best friend. You aren’t following your particular recipient? No problem. As long as you have an account you can opine or inform to your heart’s content–even if it’s not amongst the popular topics of pop culture, technology, breaking news, or politics. Through its hashtags and trending topics, Twitter is easy to navigate, and files everything into neat little boxes–fitted with further hashtags acting as sub-topics.

But what makes Twitter unique? What steps it away from every other social media platform that keeps people connected and allows sharing? Twitter users are restricted to a 140-character limit in every post. This may sound easy to overcome, but not so much when trying to condense complex readings into a short sentence. Generally used to spread breaking news, natural or human disasters or popular issues, this restriction allows the point to get across immediately. While keeping it concise means your attention is grabbed instantly, the challenge is shaping the post so that it still makes sense. There is nothing worse than a post with very important words, but nothing connecting them. The tone of the post also contributes. Most of the posts on Twitter fall into two categories: opinionated (and biased), or informative (and educated).

With all of this in mind, it was time to undertake the web scraping task. Originally, the Twitter Advanced search paired with the Twitter Archiver add-on seemed like the ideal program or tool to use. Not only was this task needed, I wanted to use it for my benefit, and expand my knowledge of the Internet of Things and data privacy in general. The process of scraping the data with the Twitter Advanced search and archiver was simple: the words ‘data’ and ‘ownership’ must be present, and ‘privacy’ was a keyword that could pop up. However, this didn’t turn up much, and it felt that the search was moving away from the original intended issue. A few posts back, the Internet of Things was the focus or specific issue within data that was being investigated. In trying to get back on track, more secondary research was conducted, as well as a repeat of previous class exercises. By doing this, I would hopefully get back onto an issue that was talked about more, and that I could possibly create some visual design responses for.

So here comes the tool Brand24: an online program that businesses can use to monitor what social media users are saying about their company, with the additional feature of being able to respond to them. With a new focus in mind, a new process was developed–heightened by the added features and functions of Brand24. The first step is for the tool to search the internet for any posts with the exact phrase ‘Internet of Things’, and the added keyword ‘privacy’. From here, the process is to only search through Twitter posts, and then play around with the keywords. Based on the previous results, some keywords could be added in to narrow the outcomes further, or excluded words could be input to hopefully specify target users or situations. The next stage of this process is to play around with the added features of the influence slider and the emotion scale. The influence slider allows you to see which tweets or people held the most influence in the search in terms of visits, retweets, comments and likes, while the emotion scale allows you to accumulate positive, negative or the default neutral posts. These extra features could aid the process–as well as the type of results–as I could see whether the tool was accurate in its findings, and get straight to the point on what were the most popular tweets surrounding the issue. The final stage of the process is to visit the top sites tweeted about to expand my understanding of the issue further, and to revisit the saved search often to view developments.
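The filtering described above (required phrase, added keyword, excluded words, and a minimum influence) can be sketched roughly as below. This is not Brand24’s actual API or data format–the tweet records, field names and thresholds are illustrative assumptions only.

```python
# Rough sketch of the search filters described above. The tweet dicts,
# the "influence" field and all thresholds are illustrative assumptions,
# not Brand24's real interface.

REQUIRED_PHRASE = "internet of things"   # exact phrase that must appear
KEYWORD = "privacy"                      # added keyword
EXCLUDED = ("business", "company", "patient")  # words that disqualify a post

def matches(tweet: dict, min_influence: int = 0) -> bool:
    text = tweet["text"].lower()
    return (
        REQUIRED_PHRASE in text
        and KEYWORD in text
        and not any(word in text for word in EXCLUDED)
        and tweet.get("influence", 0) >= min_influence
    )

tweets = [
    {"text": "The Internet of Things raises real privacy worries at home", "influence": 7},
    {"text": "Internet of Things privacy checklist for your business", "influence": 9},
    {"text": "Smart kettles are fun", "influence": 2},
]

results = [t["text"] for t in tweets if matches(t, min_influence=5)]
print(results)  # only the first tweet passes every filter
```

Raising `min_influence` mirrors the influence slider: it trims low-reach posts, which is exactly where the trade-off discussed later comes from, since original but little-retweeted content falls below the cut.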

Proposed process
Proposed process

Below is a flow chart that demonstrates the process that was actually taken in this web scraping task.

Actual process taken
Actual process undertaken

The process itself along with the Brand24 tool proved to be a good combination. The detailed and generative process that was designed was enhanced through the features and added functions of the web scraper. The combination allowed me to explore within a topic that was both specific but also broad. I could begin with the broad spectrum such as the Internet of Things, and narrow it down by ‘privacy’ keywords. Also, having excluded keywords such as ‘business’, ‘company’ and ‘patient’ allowed the search to zero in on more generalised posts that were hopefully more targeted to the everyday social media user. It was interesting to see what posts were collated when these aspects weren’t included.

The parameters

This exclusion did work; however, I felt that the results were very informative and unemotional, although this was common across all of the posts gathered. Furthermore, the influence slider turned out to be both an advantage and a disadvantage. It was an advantage because it could narrow down the most popular tweets in the search, eliminating a lot of the retweets; however it was also a disadvantage, because as the slider was increased, two things happened: most of the results were of about 5 original posts retweeted multiple times, or some of the less retweeted and original content was eliminated–ultimately, a loss.

Examples of results with a low influence value
Examples of results with a high influence value

As implied previously, a lot of the posts were just statements or the name of the article / document attached to the tweet. Or if they were of an opinion, they were direct retweets of the original opinion. This result became difficult as I was hoping to discover some original posts that gave an opinion on the privacy issues. However, these were far too rare, possibly due to either the broader spectrum of data and privacy, or the platform of Twitter with its character limit restrictions. Overall, this facet was a little disappointing.

Examples of the expansive retweeting

In terms of the Brand24 tool, it makes the decision of whether a post is positive, negative, or neutral itself; however, it often gets it wrong. If there is a negatively associated word in a positive post, then it will judge the post only on that word. Or if there is a link in the post, it generally marks it as neutral. The same outcome occurs if the post is a statement and not an opinion. Therefore, the tool gets it wrong a lot of the time, skewing the results because it lacks the human decision-making element.

Negative tweet that's been categorised as neutral
Negative tweet categorised as neutral
Possibly positive tweet categorised as neutral
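The failure mode above is easy to reproduce with a toy word-list scorer. This is only a guess at how such sentiment tools work under the hood, not Brand24’s real algorithm; the word lists and the `classify` function are my own illustrative assumptions.

```python
# Toy word-list sentiment scorer, illustrating the misclassification
# described above. An illustrative guess, not Brand24's actual method.

POSITIVE = {"love", "great", "finally", "protect", "good"}
NEGATIVE = {"breach", "risk", "worry", "invasive", "bad"}

def classify(text: str) -> str:
    words = set(text.lower().replace(",", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A genuinely positive tweet gets labelled negative, because the two
# negatively associated words outweigh the single positive one.
print(classify("Love that this update fixes the breach risk for smart homes"))
# 'negative'

# A plain statement with a link matches no list at all, so it defaults
# to neutral, just like the tool's behaviour with link-only posts.
print(classify("New report on IoT data collection https://example.com"))
# 'neutral'
```

The sketch shows why the human element matters: the words say “breach risk”, but only a human reads “fixes the breach risk” as good news.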

With these results in mind, there are a few visual design responses that could arise–however strictly initial concepts. Firstly, a response could be a set of posters or a service design that aims to educate and inform users of the lack of, or hidden, privacy in the Internet of Things. Along the same line, the response could be a system or service in the IoT, such as an app that acts as a VPN. It could be a new login screen on social media apps to opt out of the monitoring. Or another response could be a flyer placed in the boxes of new appliances and products to warn people of their connection to the internet or iCloud.

Since this post was so large in content, ideas and data, here are my findings from the web scraping and the task altogether.

  1. Twitter allows for short posts, but this also restricts what a person can say, as conveyed by the extensive retweeting.
  2. With a topic as broad, new and big as the Internet of Things, most of the posts are informative and statement-based.
  3. It is best to search around for a web scraper or tool that works best for you, as it could make the process easier.
  4. Even though the process didn't work the first time around, I kept changing the parameters until I found something that was both interesting and collated reasonable results. Playing around with the parameters meant that different dynamics could be explored.
  5. When working with data and web scrapers, the task doesn't always go to plan. Computers don't think like us humans; they don't see the emotional side.



Featured Image:

Twitter_cover n.d., Theme Expert, Google Images, viewed 12 September 2016, <;

Post 5 -Approaches to design for change, design-led ethnography

Since data, privacy and security have been the major issues that I have been investigating, I wanted to find out what people thought of when they are asked about privacy and security, and distinguish an opinion around how online sites might be allowed access to your data.

The first step in developing the probes was the interviews carried out with class peers. Starting with general questions about personal data and online privacy, the questions were framed mostly around whether or not the interviewees valued privacy and were concerned with the direction it is heading in the future. The two interviews that were conducted produced very opposing views on the issue. One interviewee was very cautious of how their data is being used, constantly checking privacy settings and deleting their browser history daily; the other had a much more laid-back position, happy for companies to have their data when they have nothing to hide, believing that there is worse content out there and that companies would inevitably get it anyway. As stated before, very opposing views.

With these interviews in mind, the concept for the design or cultural probe was to simplify the questions and really find out how people felt about online privacy, security, and anything they would question themselves. The first part of the probe was a one-time activity of answering four questions. The drive for these was to find out how the users defined privacy and security in their most basic forms. The other questions were more open, asking whether they value privacy–as some users don't care–as well as a question, related to data, that they have always wanted answered.

Design probe for data and privacy

In terms of the results from these questions, the two I received back were very much the same in their views. They define privacy as freedom: keeping to yourself and not worrying about things being forced out of you. Security they define as the state of feeling safe, protected, free from harm, and not needing to be on the lookout for danger. It is interesting to see that although the definitions are very similar, the contexts in which they are used are different. Privacy is more of a personal state, where we choose the level or lack of privacy, whereas security is a public state, defined by the environment we are in. The other questions portrayed more personal views on the topic, with privacy and security highly valued qualities, but not something explicitly sought after. An interesting point brought up through this part of the probe, however, is the query of how large businesses keep their data, and the data of their clients, safe–an investigation for another time.

Probe activity about defining privacy and security

The other part of the probe was more of a visual recording and mapping exercise. Throughout the week–or all in one day–the participants were asked to record with stickers every major site they frequent. With a focus on the concept of online accounts, and using different coloured stickers, the participants recorded details such as the following: whether the site required an account to view the content, whether an account was just suggested for better viewing, whether the site didn't require an account at all, or whether they already had an account with that site. Sites could carry multiple stickers if the participants wanted to provide extra information. The concept behind this part of the probe was to gauge the number of sites on today's world wide web, and whether this can be linked to the increase in personal and preferential data gathered and stored.

Just as I suspected when developing this part of the probe, most popular or frequently visited sites require an account to view the content. It is unclear exactly why they require an account, as I didn't read the terms and conditions of any new sites I signed up to; however, it seems a plausible conclusion that through the account–and the subsequently accepted terms and conditions–the company behind the site now owns any data that you create on it. This observation would certainly require more research into why accounts first started and why they are used so widely today, but from the probe results it is clear that this could be a possible answer.

Probe activity based around online accounts #1
Probe activity based around online accounts #2

Even though this design probe was somewhat of a success with regards to the answers and results, there were some difficulties both in developing the probe and in trying to receive the probes back. The initial concept behind the probe was to dig really deep into the participants' minds and pose some tough questions. However, given some of the results from the interviews, in-depth questions or probes seemed too hard for the users to understand without being instructed face-to-face. Therefore, it was decided that a simpler probe would provide more cohesive and accurate results. The next trouble was how to get designers to participate and interact with an activity when it deals with data and the digital space. In keeping it simple, drawing was the first thought, but it didn't match the activity; the visual representation of the stickers instead provided a simple interaction while recording different types of the same data.

Another problem that arose during this probing exercise was actually getting the probes back. I handed out 5 probes in order to get a good range of results and really survey the audience; however, I only received two back–even after constant messaging. An improvement on this issue would be to hand the probes out slightly earlier so that I could collect them back first-hand. That, and coming up with a simple and effective probe suitable for the issue of data, were really the only problems encountered. I wouldn't change much if the exercise were to be repeated; only the method of sending and receiving, and possibly the type of task explored. The original plan was to stay away from privacy settings and from recording sites that ask for access to your information; however, with data this was always going to be an underlying thread, although I did manage to spin it in a different way.

Some results and insights from this exercise I already knew or guessed, however some were also quite surprising and interesting to see. These insights can be reduced down into the following five points:

  1. Privacy and security are valued even when not explicitly sought.

  2. The definitions of privacy and security are similar however they operate in different contexts; privacy is the private sector, whereas security is the public environment.

  3. Most popular or frequently visited sites on today’s web require an account to view their content.

  4. A reason for the overpowering number of sites requiring or suggesting an account could be that they then own any data that you create or that is stored about you.

  5. Participants in design or cultural probes may not act as you expect or want.

Post 3 – Mapping the participants and constructing an image archive

This exercise of creating a visual map of the stakeholders involved in data and online privacy was an excellent way to see who the main stakeholders are, their dominant beliefs towards the issue, as well as the interactions between stakeholders in terms of their positions on the issue and where their beliefs cross.

Political proximity of stakeholders in online data, privacy and security.

In terms of the beliefs, the group took that to mean how each stakeholder was using the data–positively or negatively–as well as their position on the current issue of privacy and security. Stakeholders such as technology companies and the government are on the positive side for reasons of innovation and technological advancement (especially in the context of the Internet of Things); they are for data mining and tracking. The general users, on the other hand, are on the negative side, as they don't want their private or personal data and details to be used for targeted advertising or to track their movements. They are scared of the hackers and hacktivists, who also sit on opposing sides of the issue. The hackers are on the positive side, as they are often the thieves of online content and information and can use data mining and tracking to take advantage of users online, whereas the hacktivists are on the negative side with the users: they are against innocent people being taken advantage of, and want to expose the wrongdoings of the government and the hackers.

The spectrums explored throughout the maps were mainly positive and negative views on the issue–whether that be how stakeholders used the data, or whether they wanted data mining, storing and tracking or not. Generally, the users were on the side of the hacktivists and not-for-profit organisations, but in terms of their interactions and relationships with other stakeholders, they were alone on the map. On the other side were government bodies, businesses, technological and data companies, and hackers–who are for the tracking and the mismatched security online. Basically, if the stakeholders' positions and beliefs were reduced to their simplest form, they would be the following: the private sector on the negative side, and the public sector and related industries on the positive side.



Throughout the research process of the issue of data privacy and security, many images have popped up. Below are ten of the images or visuals that either resonated with me, or helped to further develop my understanding of the issue and any related to it.



(Big Data Watchers n.d.)

This image depicts the notion that there are companies out there always watching your online presence and the data being collected. It comments on the lack of privacy online, especially with large companies targeting their advertising through the data collected about your activity and behaviour. Data collectors are always watching. This is a common idea, visually translated from the information explored in the scholarly and secondary media sources found. This visual–like the text sources–focuses particularly on the context of shopping. Big data is always being collected and sourced.



(Cyber Thief n.d.)

This image depicts the notion of online hackers stealing private or personal data. What isn't represented is what data is being stolen or taken illegally (assuming it is done illegally, given the balaclava worn by the protagonist). However, what is shown aligns with the text sources gathered: that with the internet–and the growing connectivity of devices and technology–there is an increased risk of online hacking and theft, particularly of personal data.



(Online Theft n.d.)

This image depicts the concept of online theft, and how it is just as dangerous and present as theft in the physical form. This particular message wasn't brought up in the investigated text sources, and seems to be something that isn't spoken about much. A lot of the time, users and consumers assume that because they have an account, and because they don't share any of their personal details, they are safe from online theft and hacking. This is certainly not true, and should be discussed further. This image creates a clear understanding of the issue, and simplifies it down to a visual representation–something the text sources were lacking.



(Internet of Things Explained n.d.)

The Internet of Things can be a difficult concept to grasp, but this image gives a good visual to help develop an understanding. The idea with the Internet of Things is that all devices and appliances will soon be connected to the internet to better our lives, and ultimately create smart homes–the scene represented in the image. What is left out of this image is an explanation of how it will work, or how the devices would speak and communicate with each other. The concept of smart homes was a common idea among the scholarly articles, as they discussed the more technical, practical applications of the Internet of Things rather than just a brief definition, while the problems and issues were explored in the popular media sites. Even though this is a clear visual representation, it does lack some explanation and clarity.



(Internet of Things Graphic n.d.)

This image also aims to explain what the Internet of Things is and what it can do, and possibly does so in an easier way. It doesn't give a real-world example like the previous image; rather, it portrays the different avenues in which the IoT could be used, and all the areas of our lives that could be affected by it. However, it again lacks any further explanation or visualisation of how we could benefit from it in daily life. This is a common theme through the scholarly articles and the popular media sites, although they tend to go into more detail–which actually provides context and a more developed understanding of the Internet of Things. This graphic is a simplified visualisation of a complex concept that needs to be understood further and explored more.



(Private Property n.d.)

This image depicts a common theme that isn't always discussed, but is commonly known and misunderstood: the internet is not private like parts of the real world. In reality, you can lock your doors, draw your blinds, and live in the privacy of your own home. The internet lacks this privacy and security, and as this image suggests, the lock would always be open. I believe this image provides an excellent metaphor for the privacy of personal information on the web. The metaphor of the gate is such a universal idea that it is easily understood. The text sources investigated previously implied this issue, but none explained it, stated it explicitly, or did it quite as clearly.



(Watched by a Crowd n.d.)

This image conveys the common thought that when on the web, everybody is watching you. They are watching your activity, behaviour and shopping preferences, which can make the user anxious when surfing the web. This image demonstrates the concept well and clearly, by blocking out the watchers so that the focus is on the user. Being watched for different reasons on the web was a common issue discussed in the text sources, especially the popular media sites; however, even then, it was never the dominant topic. This image improves on what was discussed.



(Privacy n.d.)

A black-and-white scribbled drawing doesn't often show a lot, as most images depend on colour. However, this image simply conveys the idea of watching and spying without the use of colour, through the simple design elements of solid shades and contrast. It effectively portrays the idea of always being watched online through the small eyes in the computer screen, set against the solid black. Unlike some of the other images–or the text articles–this image depicts the fact that people can watch and spy on your personal data without you even being online. It merges the media of print and digital, and demonstrates a snapshot of everyday life. Most text sources didn't exactly explain the 'not being online' clause of the issue, which this visual does so cleverly.



(Data and Privacy Spying n.d.)

Like some of the other images, this one demonstrates the common thought that there is always someone on the other side–in a digital context–spying on your personal data and online activities or behaviour. The interesting part of this visual, however, is the use of the sailor and the telescope, exaggerating the concept of spying. This notion was investigated and explored thoroughly through all the text sources found; however, I feel that, like the other images in this collection, the image does the explaining more succinctly. Like the text articles, it demonstrates both sides of the issue: the personal experience and behaviours of the user, and the overbearing, secretive surveillance of the data companies and the government.



(Users have no Privacy n.d.)

What I like about this visual is that, unlike the other images, it shows data privacy, security and data mining through a personal lens. It puts back the human element that the issue is lacking. Here we see the two main competitors for data: Google and Facebook. And as the popular media sites–and some of the scholarly texts–describe, they are looking at every part of us and our online personas. They look at our behaviours, activities, preferences and favourites, in order to sell them to agencies and target advertisements at us. However, unlike the text sources, here we are reminded of how invasive this data mining is, and the image helps to bring back the human element and personalise the experience.




Big Data Watchers n.d., Anthill Online, viewed 14 August 2016, <;

Cyber Thief n.d., Google Images, viewed 14 August 2016, <×400.jpg?v=1457102445>

Data and Privacy Spying n.d., Google Images, viewed 14 August 2016, <;

Internet of Things Explained n.d., Google Images, viewed 14 August 2016, <;

Internet of Things Graphic n.d., Google Images, viewed 14 August 2016, <;

Online Theft n.d., Google Images, viewed 14 August 2016, <×282/Computer_Theft.jpg>

Privacy n.d., Google Images, viewed 14 August 2016, <;

Private Property n.d., Google Images, viewed 14 August 2016, <;

Users have no Privacy n.d., Google Images, viewed 14 August 2016, <;

Watched by a Crowd n.d., Google Images, viewed 14 August 2016, <;


Post 4 – Identifying and Collecting a Design Example

As stated in both of my previous blog posts, data is a big concept, and there is a lot of it to try and understand. There are several ways that data can be broken down into pieces to better understand what is happening and what can be taken from it, and one of these ways is through data-driven design, or data visualisation.

There are many studios and individuals in the world today that are working within the emergent practice of data-driven design, transforming complex data and systems into easy to understand–and follow–visualisations.

Two studios–or projects within those studios–caught my interest through the type of information they were dealing with, and the successful and visually interesting outcomes they delivered.

Firstly, the studio Density Design came onto my radar through the lecture on emergent practices, and the clean way they deal with the Twitter sphere. Somewhat of a generative system, Density Design created a living data visualisation of the incoming tweets about a particular conference they were attending. Displayed behind the speaker, 'Andromeda' was designed to move away from the static, one-tweet-at-a-time system, and to "visually represent the social dynamics that arises around a topic". The focus in this experiment was on the relations between the users: how active they are, and how they interact with the topic and each other.

(Density Design 2012)

With the metaphor of a galaxy or the solar system, the display wall is the space of debate, and the users are elements that move within it. To keep the focus on the interactions between users and the developing topics, the visuals were designed to be non-intrusive; 2D shapes and a single colour palette. Using the program Processing, this data visualisation is an excellent example of displaying data in a clean and easy to understand way. We see the shapes grow and develop if the topic is trending, and see users interacting through the proximity of elements. It appears like it would need a developer who knows some seriously complex code, however the principles behind the concept and execution can be replicated easily enough in a static 2D visualisation.
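As a rough illustration of how those principles might transfer to a static 2D piece, a user's activity can drive marker size and interaction counts can drive proximity. This is a minimal sketch under invented data and scaling constants–not Density Design's actual Processing code.

```python
# Minimal sketch of the static-2D idea behind 'Andromeda': each user's
# marker grows with their activity, and users who interact more sit
# closer together. All data and constants here are invented.

activity = {"alice": 12, "bob": 4, "carol": 8}           # tweets per user
interactions = {("alice", "bob"): 6, ("bob", "carol"): 1}  # mentions/replies

def marker_size(tweets, base=10, scale=4):
    """More tweets -> a bigger marker on the page."""
    return base + scale * tweets

def distance(pair, max_gap=100):
    """More interactions -> a smaller gap between the two markers."""
    return max_gap / (1 + interactions.get(pair, 0))

sizes = {user: marker_size(n) for user, n in activity.items()}
print(sizes)                                 # {'alice': 58, 'bob': 26, 'carol': 42}
print(round(distance(("alice", "bob")), 1))  # 14.3
```

Mapping the two variables this way keeps the same non-intrusive principle: simple shapes whose size and spacing carry all the meaning.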

Another project that caught my eye was more of a personal project. Not interactive and generative like the previous example 'Andromeda', 'Every Day of My Life' is a static data visualisation of two and a half years of computer usage by a member of the team at Variable. What is immediately striking about this project is that you get a very clear indication of the computer usage over the time period straight away. With each line representing one day, there is a lot of time to be covered–913 days, to be specific. However, the data has been broken up into very understandable elements: each colourful block is the foreground app running at that moment, and black periods are when the computer isn't turned on. Therefore, sleeping patterns and holidays are easily identifiable. The data is even broken down by colour into the type of computer interaction: keyboard hits are coloured greeny-yellow, and mouse hits red. This adds to the complexity of the visualisation, while simplifying the type of data.

(Variable 2012)

The project, gathered through an app that logs the computer's usage, could also be replicated at a simpler level, by recording the times that our computers are on and off, and plotting that over a timeline. 'Every Day of My Life' is a very simplistic data visualisation, but it is the range of colours, and the fading colours, that add to its sophisticated nature.
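A minimal sketch of that simpler replication might log (start, end, app) sessions and reduce them to hours of use per day, one row per day–echoing the one-line-per-day layout. The sample log and field layout below are invented for illustration; Variable's actual logging app works differently.

```python
from datetime import datetime

# Hypothetical usage log: (start, end, foreground app) per session.
log = [
    ("2016-08-01 09:00", "2016-08-01 12:30", "browser"),
    ("2016-08-01 13:00", "2016-08-01 17:00", "editor"),
    ("2016-08-02 20:00", "2016-08-02 22:00", "browser"),
]

def hours_per_day(sessions):
    """Reduce sessions to total hours of use per calendar day."""
    totals = {}
    for start, end, app in sessions:
        s = datetime.strptime(start, "%Y-%m-%d %H:%M")
        e = datetime.strptime(end, "%Y-%m-%d %H:%M")
        day = s.date().isoformat()
        totals[day] = totals.get(day, 0) + (e - s).seconds / 3600
    return totals

totals = hours_per_day(log)
for day, hours in sorted(totals.items()):
    print(day, round(hours, 1))  # one row per day, like one line per day
```

Days missing from the log would simply print nothing–the equivalent of the black "computer off" periods in the original piece.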






Density Design 2012, Andromeda, viewed 18 August 2016, <;

Variable 2012, Every Day of My Life, viewed 18 August 2016, <;


Post 2 – Exploring data ownership in Big Data and the IoT

Data, privacy and security are large concepts to eat into–big topics with big questions to follow. So how do you break them down into pieces to nibble on? Continuing on from the last post, I looked into secondary news sources to find a smaller issue to bite into. And after stumbling upon the IoT in most data-related articles for a week now, I decided to finally dip in and figure out exactly what the IoT is.

The IoT, or Internet of Things, is the new revolution taking over the web–the next internet–where all of our devices will be connected to the internet and be able to talk to each other. A thought that surprisingly didn't blow my mind like I expected. All the news articles I read left me with one question: if everything is connected and producing data, who owns this data, and who decides who owns it?

The first scholarly article that piqued my interest introduced another new term: HDI, or Human-Data Interaction. In the IoT sphere, this term means that all devices will "collect data that is produced by or about people" (Mashhadi et al. 2014). The authors of this article (Mashhadi, Kawsar and Acer) are all researchers for Bell Labs, the research and development subsidiary of the telecommunications company Alcatel-Lucent, and as such have a professional background to pull from. Being scientists and researchers, they provide a very technical standpoint on the IoT, actively writing about its innovations, opportunities, ubiquitous nature, and consequences. Writing mainly for the IoT community, these authors provide an easy-to-read, trustworthy account of the issue. With regards to their position on the IoT and data ownership, their views are similar to those of most, if not all, IoT authors: who owns the data collected, who should have access to the data, and the notion that a solution to the issue needs to be put into practice soon. In relation to this, the authors pose three models to investigate: the pay-per-use model, the data market model, and the open data model–three models that can be unfolded and discussed another time.

Similar to the previous source, Lara Hirschbeck–a student at the University of Munich–explores privacy and security challenges in the IoT further, with a focus on users and daily devices. Since Hirschbeck is still a student, her opinions and positions initially didn't appear 'valid' in terms of being a scholarly source. However, further research into her paper showed it is part of a collection of essays and reports from a 'Human Computer Interaction' seminar, given to the Department of Computer Science–thus providing some credibility, having been reviewed by tutors. As it is a technical report, Hirschbeck delivers more of an informative position on the issue, detailing the advantages and disadvantages of this new communications space. On the other hand, she is certainly of the position that the more data is collected, the more issues arise, particularly around the privacy of the data and the security of how it is being used. Her personal position–like that of most other authors investigating the IoT–is that it is up to the users to decide what data they allow to be collected, what data they use, and which risks they are willing to take to get the digital advantages.

On a more balanced note, the last article to pique my interest delved into both sides of the argument: the advantages and opportunities versus the consequences and privacy issues. The authors of this article are all actively involved in the IoT and Big Data sectors, and are avid scholarly writers on these topics and issues. Due to this, and their interest in the internet and its related technologies, these authors certainly have a comprehensive knowledge of Big Data, the IoT and all the privacy concerns that come with these concepts. Even though they are primarily writing for researchers, developers and peers in the data industry, the article 'Privacy of Big Data in the Internet of Things' is written for the middle man–people interested in the field who don't have an extensive background in computer science. What was interesting about this article was the authors' 'on the fence' position regarding Big Data and the IoT. On one hand, they position themselves on the side of opportunity: these two concepts can offer great opportunities for learning and innovation, especially with regards to reducing waste and cost, and increasing productivity. However, they also hold that with these great opportunities comes the negative: regulations that are not sufficient to support "privacy guaranteed data management life cycles" (Perera et al. 2015). In other words, the regulations at play can't support the idea of guaranteed privacy in a world of vast amounts of data.




Hirschbeck, L. 2015, ‘Can you Trust your Fridge? Privacy and Security Challenges in the Internet of Things Era’, Media Informatics Advanced Seminar ‘Human Computer Interaction in the Internet of Things Era’, pp. 40-47.

Mashhadi, A., Kawsar, F. and Acer, U.G. 2014, ‘Human Data Interaction in IoT: The ownership aspect’, Internet of Things (WF-IoT), 2014 IEEE World Forum, pp. 159-162.

Perera, C., Ranjan, R., Wang, L., Khan, S.U. and Zomaya, A.Y. 2015, ‘Big data privacy in the internet of things era’, IT Professional, vol. 17, no. 3, pp. 32-39.

Header Image:

Anthill Magazine n.d., The Big Data of You: How your online reputation shapes economic outcomes, Anthill Online, viewed 8 August 2016, <;

Post 1 – Delving into data security and privacy with secondary sources

The first article that caught my eye and sparked my mind was, of course, one about today's craze of Pokémon Go. Bernard Keane, the politics editor for the Australian news source Crikey, explores the dangers of Pokémon Go, and makes the case that it isn't so much of a problem in the scheme of data. Being a writer and editor in Crikey's Canberra press gallery, as well as an active writer in the politics, national security and economic sectors, Keane mostly writes for avid tech fans and the technology community; a passionate writer in both the physical and digital contexts.

As opinionated as an article can get, this one probably takes a medal. The article is quite opinion-based in its tone and language; however, it is also well researched, providing details about digital protocols and the factors that contribute to either impenetrable security or a lack of it. A problem, however, is his tendency to go slightly off track–introducing driverless cars into the mobile data and privacy discussion. But it is through his opinionated viewpoints that his bias about Pokémon and the internet really takes hold. Even though I mostly agree with his standpoint–that users shouldn't be worried about technology companies and government agencies accessing and using their Pokémon Go data for corporate benefit, because you can guarantee that several companies have already accessed that information before them–he comes across a little too carefree, especially when compared to other authors reporting on similar issues. Overall, his point is common across all authors: it's scary how easily available our data is to others.

On the same wavelength as Keane and his Pokémon argument, Robyn Ironside from the News Corp Australia Network exposes the concern from the airlines Virgin and Qantas about passengers photographing travel documents. Ironside is an active writer in the travel sector, and typically writes warning articles for future and current travellers, covering everything from security issues to serious health and physical dangers. Although she is an avid writer for the travel sector, data privacy and security issues don't seem to be her usual topic. This is evident in the numerous direct quotes from primary sources, ultimately blurring the line of whether she is trustworthy or an expert on the issue. Due to the level of primary sources, I find the article itself trustworthy, but view Ironside as more of a conveyor of information, as there isn't much original text in the article.

For what original text there is, there isn't much of it to determine a position on the issue; it is rather just a paraphrasing of the interview. Therefore, this factual and informative piece isn't biased towards a particular viewpoint. However, this doesn't mean there isn't an argument being made. The notion that personal data is easily taken from digital sources is one common among all authors, although Ironside exposes people's carelessness as a contributing factor–one that isn't so common between authors.

On the other side of the 'wrongdoers' in this data and privacy sphere is the Australian Government. Jessica Longbottom, a reporter and producer for the ABC, is highly regarded in the news and reporting industry. With numerous achievements and accolades under her belt, Longbottom is an active writer who reports for two audiences: the everyday person, as well as the avid news readers who look for facts and facts only.

Even though Longbottom is a devoted author for the ABC, she doesn't write on a specific topic or issue, and because of this, I couldn't call her an expert in her field. By the look of her previous articles, it seems that Longbottom writes whatever she is given–topic regardless–mainly around political issues like asylum seekers, education in terms of teachers, and also the arts and culture sector. To me, Longbottom is a passionate writer with a focus on getting any and all important stories out to the public.

And her passion certainly comes across in the tone and language of the article. Longbottom's piece on the Census and the Government's secret storing of details is very factual and well-researched. However–and it is odd to say this–her opinionated tone is refreshing when it comes to this issue. By speaking in the name of 'everyone', Longbottom doesn't present any bias, but speaks from experience, and delivers an article more about the facts than the argument.

However, all in all, the facts presented by Longbottom are common among all political and economic authors commenting on the 2016 Census–myself included. The people have been lied to in terms of what has been gathered and stored; a common position in the scheme of privacy and data.

Other articles that caught my interest were pieces that commented on the lies and unseen changes made to social media privacy. Everybody uses Facebook, and everyone wants some level of privacy. Alex Hern is a technology reporter for The Guardian, who actively writes for the Technology section. He is an author who hasn't written a lot about Facebook; however, most of his articles for The Guardian involve stories about privacy, security and their connection–or lack thereof–to social media. In particular, his stories sit in the context of hacking, stocks, competitors, flaws and the new technologies about to emerge.

Hern is certainly a passionate writer, involved with all aspects of the news–as seen through his avid 'spreading of the word' on his Twitter page. But what I loved about this article was the fact that he put himself into the story–literally. Hern comments on the fact that Facebook has changed its privacy settings and level of cover without any statement of the change. So what does Hern do? What any of us would do. He goes onto his Facebook page and checks his level of cover and settings. And funnily enough, everything is now public, even if it suggests otherwise. Hern creates a very opinion-based article with some factual elements; however, through his language and almost conversational tone, he presents a heavy bias against Facebook and its 'unseen' wrongdoing. Nonetheless, his position is certainly a common theme in social media circles, and his argument that it is too easy for such changes to go unseen is shared among all the authors.

Similar to this story is that of WhatsApp–a messaging application that is in the bad books for not completely deleting conversations as users would expect. The author, Edward Moyer, is an associate editor and news reporter for CNet and other news and media sites, though he doesn't always write for the same outlet or about the same issues. Moyer is a passionate author in the technology and security sectors, often covering stories connected to technology, security, tech culture, industrial tech, the internet and digital media, and he regularly continues to 'spread the word' through his personal social media. Due to his variety of interests within the technology and security industries, he covers a lot of stories about issues with online privacy. Up to this point, though, he hasn't covered much in the way of the privacy or security of applications, or issues relating to data collection, writing more about big technology brands and flaws in their programs.

However, the article does seem well-researched. Speaking mainly from one source–the security researcher and their investigation–and his own personal opinions, the article takes on more of an informative role. Along with this, Moyer doesn't necessarily present any bias, but leaves us with quite a negative perception of the issue. His language–whether informative or opinion-based–conveys a negativity towards the security and privacy of this application. And I agree. I wouldn't want my 'deleted messages' to still be in the public sphere, and it seems other authors commenting on the privacy and security of social media wouldn't either.

And like data in our growing digital world, there is always more to know, more to understand, and more to question. So where do these articles bring me? What do I find interesting? What have I observed is lacking in research? Three positions that interest me from here are the following:

1. The good that is being done in the privacy and security sector–the advantages of big data.
2. How are the Government and companies using the data that they are collecting from us, and what happens to it, or why do they collect it?
3. Who owns this data being collected? Is it the people, because it is made by us, for us? Or is it the Government and companies that look to reap the benefits of our data?

So many questions for so much data.

Hern, A. 2016, ‘Facebook is chipping away at privacy – and my profile has been exposed’, The Guardian, 29 June, viewed 29 August 2016, <>

Ironside, R. 2016, ‘Why hi-tech boarding passes and public Wi-Fi access points are security risks for travelers’, News Corp Australia Network, 29 July, viewed 30 August 2016, <>

Keane, B. 2016, ‘Don’t worry about Pokemon Go, you’re already in the panopticon’, Crikey, 13 July, viewed 30 August 2016, <>

Longbottom, J. 2016, ‘Census 2016: Privacy advocates say people’s names should not be retained’, ABC News, 22 July, viewed 31 August 2016, <>

Moyer, E. 2016, ‘WhatsApp chats leave a record even after deletion, says security researcher’, CNet, 30 July, viewed 28 August 2016, <>


Header Image:

Peoples Bank n.d., Online Security, Google Image search, Peoples Bank, viewed 12 August 2016, <>