{"id":172,"date":"2019-02-14T21:03:37","date_gmt":"2019-02-14T21:03:37","guid":{"rendered":"https:\/\/www.happyfinish.com\/wordpress\/?p=172"},"modified":"2023-04-25T12:30:59","modified_gmt":"2023-04-25T12:30:59","slug":"ai-ar-future-personal-raconteur","status":"publish","type":"post","link":"https:\/\/happyfinish.com\/ai-ar-future-personal-raconteur\/","title":{"rendered":"AI and AR: Your Future Personal Raconteur?"},"content":{"rendered":"\n

An AI driven technology really shocked me recently<\/strong><\/p>\n\n\n\n

I joined my 7-year-old daughter, India, in the living room as she said \u2018Hey Google\u2019 \u2013 something she does often without following up with a question. It\u2019s like she is just saying hi to a school friend as they pass by. Our Google Home responded \u2018Namaste, India\u2019. My daughter giggled and told me that means hello in Hindi.<\/p>\n\n\n\n

The link between the greeting and my daughter\u2019s name could well have just been a coincidence, but it struck me that this connected mic may well learn a vast amount about its owner or owners, and eventually tailor its response not just to answer questions in a factual way, but to provoke a broader emotional response. In this case, it was a connection which felt a tad deeper than the norm between a human and a machine or software. Like a human-on-human interaction.<\/p>\n\n\n\n

Arguably humans build emotional connections with devices and software regularly \u2013 addiction to checking a phone, or the joy some find in making progress in the newest AAA game, for example. But this felt different. Here, the connection was not crafted by a human hand: the machine itself decided to use the word Namaste at that moment, based on its learnt understanding of Indy.<\/p>\n\n\n\n

“BY 2022, YOUR PERSONAL DEVICE WILL KNOW MORE ABOUT YOUR EMOTIONAL STATE THAN YOUR FAMILY”<\/p><\/blockquote><\/figure>\n\n\n\n

Intrigued, I started looking deeper into how machines are being trained to understand how humans interact and react emotionally with one another and with the machines themselves.<\/strong><\/p>\n\n\n\n

It seems there is a lot going on. Annette Zimmermann, a VP of research at Gartner, recently said: \u201cBy 2022, your personal device will know more about your emotional state than your own family.\u201d<\/a> So, what does that mean if it holds true? I\u2019ve spent the last 10+ years working in the world of content creation, where generally my goal, and the goal of the teams around me, is to trigger some sort of emotion in our audiences. And when it\u2019s content that\u2019s being funded by a brand, we want that emotional reaction to trigger some sort of behaviour: buy more of Product X, or at the very least remember Brand Y. When it\u2019s content created for art\u2019s sake, the goal is still to provoke a reaction. To tell some sort of story.<\/p>\n\n\n\n

So, when machines will eventually know us better than we know each other, or indeed ourselves, what\u2019s to stop them from creating the perfect content to provoke any or all reactions that may be desired in the audience? Not a huge amount it seems.<\/p>\n\n\n\n

For some context: as humans, we are constantly analysing and making best guesses about the emotions of others. Much of that happens unconsciously. We deftly change how we interact with people based on the vibe and mood we detect in an individual and the situation. But very soon, machines may be much better social chameleons than we are.<\/p>\n\n\n\n

The ability of machines to use more than facial expression, or what we say and how we say it (or type it), to deduce emotional state is already here. Researchers at Ohio State University recently built a computer-vision AI model around a cue that humans follow automatically when reading emotions: small changes in facial skin colour. After training, the model proved better at detecting emotional state in human subjects than fellow humans did in the researchers\u2019 tests \u2013 successfully determining happiness 90% of the time, anger 80%, sadness 75%, and fear 70%. Apparently, reading skin colour is something we innately do to help us determine whether the person we are engaging with is angry, sad, happy, bored, embarrassed etc\u2026<\/p>\n\n\n\n

But this stuff is embryonic \u2013 if we really want to approach human-level AI, it\u2019s a whole other ball game than a discrete, research-driven test. We need long-term investment, and deep collaboration between academia and corporations, to make strides ahead.<\/p>\n\n\n\n

What the Ohio State research and other similar projects show is that machines have a proven ability to grasp patterns in human behaviour. The next level is for a machine to understand the causality between the detected behaviours and the stimuli that have triggered them. This is a big task.<\/p>\n\n\n\n

If we do eventually find ourselves with a good causal model of the world we are dealing with, we can generalize even in unfamiliar situations. This is crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don\u2019t have these causal models. Yet.<\/p>\n\n\n\n

“AN AI MODEL PROVED BETTER AT DETECTING EMOTIONAL STATE IN HUMAN SUBJECTS THAN FELLOW HUMANS”<\/p><\/blockquote><\/figure>\n\n\n\n

There is no doubt that there are some incredible developments ahead in terms of machines better understanding human beings in real time, and adapting how they present information to better engage us.<\/p>\n\n\n\n

My creative colleagues freak out a little when discussing how machines and software may evolve to understand emotions and manipulate them. It\u2019s understandable; any new advancement brings a certain trepidation\u2026 When the written word first became a popular means of storing information, people were genuinely fearful we would lose our ability to memorise things. That seems crazy now \u2013 we can probably all agree that without the written word we humans would be a hell of a lot dumber.<\/p>\n\n\n\n

It\u2019s also understandable when you consider the \u2018world-changing\u2019 rhetoric generally spouted by well-meaning, yet attention-loving futurists: AI and automation will spell doom for the majority of our jobs, or will create some sort of utopian future where we all have exactly what we need. Both may well end up being true, but then again they may not. There is a long way to go, and many incremental steps in between, before either is a potential reality. For the most part, futurists are making best guesses based on their current expertise.<\/p>\n\n\n\n

With the arrival of colour TV, RCA chief David Sarnoff waxed lyrical about how the innovation would allow people to view fine art in all its technicolour glory at home. Of course, you could do that, but nobody really does. That haughty ideal wasn\u2019t what TV ended up being about. Instead, we binge-watch series, and a mega industry has been built around filling the bits in between content with ads. Commercial market pressures have shaped what TV is today; that\u2019s taken time, happened incrementally and is still evolving. Surely it will be similar with AI and machine learning: the shape of things to come will be driven by market pressures over time.<\/p>\n\n\n\n

So people, stop freaking out about losing your job \u2013 for now at least. In the short term, machines that know how we feel and think might not be bad news at all; if you work in content creation, advertising or marketing, they may well help you keep your job. New York University research teams recently seemed to validate that <\/a>when the parts of your brain that deal with emotion are stimulated, your memory also improves around any information you are exposed to within 30 minutes of that emotional stimulus. That is potentially very powerful for improving education and training. It\u2019s also very pertinent in the world of creative content production: it\u2019s what we set out to do. If an AI can help us nail our creative content to better land our message, I\u2019m all for it \u2013 as would be all of our brand clients, I\u2019m sure.<\/p>\n\n\n\n

“MACHINES WILL GET VERY GOOD AT READING OUR EMOTIONAL STATE AND DELIVERING CONTENT THAT WILL MANIPULATE US TO RESPOND”<\/p><\/blockquote><\/figure>\n\n\n\n

So here goes my own futurism\/educated guesswork. Machines will get very good at reading our emotional state and delivering content that will manipulate us to respond in ways that they, we or others desire.<\/p>\n\n\n\n

All very interesting (I hope you are thinking). But emotions change pretty rapidly \u2013 they do in my home at least. We go from side-splitting laughter to all-out warfare in a split second at times. So, for an AI-driven content creator to deliver content to my family that is relevant to our ever-swinging emotional state, it would need to adapt in real time. Well, there are some mind-blowing developments in machine learning and the AI-driven real-time generation of content happening too.<\/p>\n\n\n\n

We are already seeing developments where unsupervised models can turn video of day into night, or a snowy scene into a sunny one, without human involvement, thanks to Nvidia.<\/p>\n\n\n\n

\n