daan weijers


An exploration of how we can redesign behaviour around media-capturing and reliving. Supporting research at the TU/e.


Spring 2015


3 months


UX & UI, Electronics


Mendel Broekhuijsen


Now that our cameras are in our pockets, we capture countless gigabytes of media content. We rarely take the time to look back, though. This project explores a reliving strategy for digital photographs, as part of research at the TU/e.


Concept video

Opico is an application that helps people tell better stories around the pictures they take, while at the same time strengthening their memories of the moment the picture was taken.

According to scientific research, people actually tell better stories without visual cues, which is why Opico provides an easy-to-use interface that allows someone to obfuscate certain parts of a picture.

Opico comes with a digital photo frame to which the picture is transferred after editing. In this way, the media can become a conversation piece in someone’s home.


“Why is it that we capture more and more media, but barely look back?” This was the question asked by the “Pics or it didn’t happen” project at the faculty of Industrial Design at the TU/e, supporting research regarding the topic.


What do we take pictures of?

As a starting point of the project, I tried to understand the basics of the human memory, as well as the reason why we share media with our friends, relatives and others.

What we share

A research paper published in 2009 — “A Study of Motivations for Mobile Media Sharing” by Dion Hoe-lian Goh et al., written when smartphones were just coming up — found that almost 99% of the media shared with family and friends consisted of pictures, just over 1% of video and under 1% of audio.

There were several reasons for this, such as the fact that storage space on the devices was (still) very limited and data bundles were relatively small. All of these things have since improved, so to get a better view of reality I analysed some friends’ data usage and estimated that — in 2015 — the ratio would probably be closer to 65% images, 25% video and 10% audio (of course heavily depending on your geographical location and preference of applications).

Why we share

The same paper explained why we share. The most important reason was to create and maintain social relationships, broken down into: sharing common ground, interest of the recipient, connecting with loved ones, and telling stories.

Human Memory

A recurring finding in the papers I read on human memory is that the more often you tell a memory to someone, the 'deeper' the path in your brain towards that memory becomes, and therefore the clearer the image in your own head. For this reason, the part I was most interested in was having people tell stories about their pictures: making the pictures meaningful again.

The user group I defined at the start of the project is young adults, aged 18–25: people who have grown up with phones in their pockets and have an affinity with downloading and using apps for taking and editing photos. People who are in their young years, making trips and memories on a regular basis and eager to share these experiences with the people around them. They often do this through social media, where much of the meaning behind the pictures is lost.

It is this group that I wanted to re-engage in telling the story behind their pictures.


“This research suggests, first, that providing [visual] cues as part of an augmented memory system may hamper the level of detail of autobiographical memories, and/or, second, that cues contain information that people may think is obvious and therefore might not want to repeat in their memory descriptions.” – Hoven, E. and Eggen, B. 2009

This excerpt from another paper I read inspired me a lot. I found it striking that a visual cue actually impairs a person’s ability to tell a story around a picture, and wondered what impairing a person’s ability to see the picture would do to the story.

The process of image editing. Original image top left, desired effect bottom right

This is when I left my desk and recruited 10 first-year students (average age 20) to send me pictures of a study excursion they had just had. For half of the respondents I pixelated part of their picture prior to an interview; the other half received their original picture. In the interview I asked them to describe what was in the picture. The result was that — on average — the pixelated group talked almost twice as long about their picture as the other group.

No Photoshop

Version one and two of the user interface

For this to work for people without Adobe Photoshop, I had to develop an application that would empower users to create the effect themselves. I started with a quick printed prototype of a possible user interface, asking users whether they could perform certain tasks, for example drawing the areas to blur.

After receiving some feedback on the positioning of the elements and (of course) the graphical style, I made a second version of the user interface. This version had a more modern visual style and a bigger drawing area that allowed for more precision. It also gave the user more control over the effect.


Technical overview of the system

I did not want to end my project with just visuals of an interface, so I challenged myself to step up my game in JavaScript. I developed a complete working prototype of the application in HTML, CSS and JavaScript.

This allowed users to upload their own images and edit them freely before sending them to the digital photo-frame.
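The core of such an editor is the pixelation itself. As a minimal sketch (not the project's actual source, and with a hypothetical function name), the obfuscation can be done by averaging the colours of each block inside a user-drawn rectangle, working directly on the flat RGBA array that a canvas `getImageData()` call returns:

```javascript
// Sketch: average-colour pixelation of a rectangular region in raw RGBA data.
// `data` is a flat array (r, g, b, a per pixel) as returned by getImageData();
// `rect` is the user-drawn region, `blockSize` controls the pixelation coarseness.
function pixelateRegion(data, width, rect, blockSize) {
  for (let by = rect.y; by < rect.y + rect.h; by += blockSize) {
    for (let bx = rect.x; bx < rect.x + rect.w; bx += blockSize) {
      const yEnd = Math.min(by + blockSize, rect.y + rect.h);
      const xEnd = Math.min(bx + blockSize, rect.x + rect.w);
      // Average the colours inside this block...
      let r = 0, g = 0, b = 0, n = 0;
      for (let y = by; y < yEnd; y++) {
        for (let x = bx; x < xEnd; x++) {
          const i = (y * width + x) * 4;
          r += data[i]; g += data[i + 1]; b += data[i + 2]; n++;
        }
      }
      r = Math.round(r / n); g = Math.round(g / n); b = Math.round(b / n);
      // ...then write the average back to every pixel in the block.
      for (let y = by; y < yEnd; y++) {
        for (let x = bx; x < xEnd; x++) {
          const i = (y * width + x) * 4;
          data[i] = r; data[i + 1] = g; data[i + 2] = b;
        }
      }
    }
  }
  return data;
}
```

In a browser, the result would be written back with `putImageData()`; keeping the averaging logic as a pure function like this makes it easy to test outside the DOM.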

Physical Prototype

initial sketches of the physical photo-frame

For the pictures to become the conversation piece I envisioned in the life of the user, they needed to get off the phone and into the ‘real’ world.

Not wanting to change the digital format of the photographs taken with smartphones, I believed a physical photo-frame would be the best of both worlds: it allows for change over time, as digital files inherently do, while also being a more central piece in someone’s home or room.

I prototyped the photo-frame by putting an iPad Mini inside a custom-made wooden box. This also made it easy to connect the frame to Wi-Fi, enabling a web-socket connection with the phone’s app prototype.
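The web-socket link between the phone app and the frame can be as simple as a small JSON envelope carrying the edited image. The following is a hypothetical sketch of such a protocol (the message shape, field names, and the `frame.local` address are all assumptions, not the project's actual implementation):

```javascript
// Sketch: the edited picture travels to the frame as a JSON envelope
// over a web socket. Keeping encode/decode as pure functions makes the
// protocol testable without a live connection.
function encodeFrameMessage(imageDataUrl, regions) {
  return JSON.stringify({ type: 'picture', image: imageDataUrl, regions: regions });
}

function decodeFrameMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type !== 'picture') {
    throw new Error('unexpected message type: ' + msg.type);
  }
  return msg;
}

// Phone side (browser sketch): send the canvas contents once the socket opens.
// const socket = new WebSocket('ws://frame.local:8080');
// socket.onopen = () =>
//   socket.send(encodeFrameMessage(canvas.toDataURL('image/jpeg'), obscuredRegions));
```

Sending the obscured regions alongside the image would also let the frame lift the effect locally, which the reveal button below relies on.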

Development of the photo-frame

I also included a button on the side, connected to an Arduino, so that if users really wanted to, they could temporarily disable the obfuscating effect by pressing and holding it.
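On the frame side, the press-and-hold behaviour reduces to a small piece of state logic. This is a speculative sketch, not the project's code: it assumes the Arduino reports `'down'`/`'up'` events to the frame, and the 500 ms hold threshold is an arbitrary choice:

```javascript
// Sketch: the obfuscation is lifted only while the button has been held
// longer than a threshold; releasing the button restores the effect.
const HOLD_THRESHOLD_MS = 500; // assumed value, not from the project

function createRevealState() {
  return { pressedAt: null, revealed: false };
}

function handleButtonEvent(state, event, timestampMs) {
  if (event === 'down') {
    // Button pressed: remember when, but don't reveal yet.
    return { pressedAt: timestampMs, revealed: false };
  }
  if (event === 'up') {
    // Button released: the obfuscation comes back immediately.
    return { pressedAt: null, revealed: false };
  }
  // Periodic 'tick' events check whether the hold threshold has passed.
  const held = state.pressedAt !== null &&
               timestampMs - state.pressedAt >= HOLD_THRESHOLD_MS;
  return { pressedAt: state.pressedAt, revealed: held };
}
```

Making the transition a pure function of (state, event, time) keeps it easy to test, with the actual Arduino serial input and render loop wired in separately.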

The finished photo-frame in a home-setting

Role in Team

This was my BSc thesis project at university. I carried out the project entirely on my own.

