
Voice assistants will have to build trust before we’re comfortable with them tracking us

  • Abstract
    We're all used to targeted advertisements on the internet. But the introduction of voice assistants like Apple's Siri and Google Assistant means that companies are capturing entirely new kinds of data about us, and could build much more detailed "behaviour profiles" with which to target us. There is already a lot of scaremongering and pushback, as there was with targeted online advertising. But over time consumers have come not only to accept targeted advertising and personalisation, but to see it as valuable. When advertising is relevant to our interests and needs, we have the opportunity to discover new brands and products. This is a win for both consumers and brands.
    Citation: KELLY, L., & LETHEREN, K. (April 2018). Voice assistants will have to build trust before we're comfortable with them tracking us. Management et Datascience, 2(2). https://management-datascience.org/articles/3954/.
    Authors:
    • Louise KELLY
      - Queensland University of Technology
    • Kate LETHEREN
      - Queensland University of Technology
    Copyright: © 2018 the authors. Published under a Creative Commons CC BY-ND licence.
    Conflicts of interest:
    Funding:
    Full text

    A behavioural profile is a summary of a consumer's preferences and interests based on their online behaviour. Google, Facebook and other platforms use this personalised data to target advertising.
    Currently these profiles are built using data on our search and internet activity and the devices we use, as well as data from our photos and our stated preferences on things like movies and music (among many other things).
    But voice adds a whole other dimension to the kind of data that can be collected – our voice assistants could pick up conversations, know who is home, what time we cook dinner, and even our personalities through how we ask questions and what we ask about.
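    To make this concrete, here is a minimal sketch, in Python, of what such a behavioural profile might look like as a data structure. The field names, weights, and update rule are entirely hypothetical; they only illustrate how a voice query could feed the same profile as search and device data.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration only: a behavioural profile as a simple record
# summarising a consumer's inferred interests and the signals behind them.
@dataclass
class BehaviouralProfile:
    user_id: str
    devices: List[str] = field(default_factory=list)            # e.g. phone, smart speaker
    interests: Dict[str, float] = field(default_factory=dict)   # topic -> inferred strength (0 to 1)
    stated_preferences: Dict[str, List[str]] = field(default_factory=dict)  # e.g. favourite music genres

    def update_interest(self, topic: str, signal_strength: float) -> None:
        """Blend a new signal (a web search, a voice query) into the existing score."""
        current = self.interests.get(topic, 0.0)
        self.interests[topic] = round(0.8 * current + 0.2 * signal_strength, 3)

profile = BehaviouralProfile(user_id="anon-123", devices=["phone", "smart speaker"])
profile.update_interest("running shoes", 0.9)   # a voice query about running shoes
profile.update_interest("running shoes", 0.7)   # a later web search
print(profile.interests)                        # {'running shoes': 0.284}
```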
    However, Google says that its virtual assistant only listens for specific words (such as "ok Google") and that you can delete any recordings afterwards.
    Remarkably, many young consumers once believed that their information wasn't being used to target advertising at all. A 2010 study showed that even though young people knew all about tracking and social media, they were still amazed at the thought of their information being used in this way.
    The people in the study assumed that if their accounts were set to private, then no one else had access to their information.

    Targeting, good or bad?

    Most of us are not fully aware of how and when our data are being collected, and we rarely bother to read privacy policies before we sign up to a new platform.
    Research shows that we find the personalisation of our services and advertisements valuable, although some experts suggest that companies aren't using the full extent of their targeting capabilities, for fear of "over-personalising" messages and having customers respond negatively.
    However, many of us have had the experience of having a conversation about a product or brand, only to be served up an ad for that product or brand a short time later. Some people fear that the microphones are always listening, although it is likely a coincidence.
    There is even a name for this in academia: the Baader-Meinhof phenomenon. This is when you become aware of a brand or product and suddenly start to notice it around you, for example in ads. It is similar to the way that once you are in the market for a new red car, all you seem to see are shiny red cars on the road.
    Baader-Meinhof theory or not, the reality is that the shift towards voice-activated search brings the potential for this information to form part of your behavioural profile. After all, if the speakers know more about you, they can cater to your needs more seamlessly than ever before.
    Will we accept this use of data as readily as we accepted our online information being used to target us? Or is this new technology going to inflame our privacy concerns?
    Online privacy concerns are influenced by consumers’ ability to control their information and also their perception of vulnerability. Some researchers have theorised that because speakers seem human, they need to build trust like a human would – through time and self-disclosure.
    However, for many of us, the benefits and rewards, such as finding information quickly and conveniently, far outweigh the potential privacy concerns that come from our personal data being used.
    What could be more convenient or comfortable than calling out to an all-knowing omnipresent “someone”, in the same way you might ask a quick question of your spouse or flatmate?
    At the moment, these technologies are still novel enough that we notice them (for instance, when Alexa suddenly started “cackling” last week). But after some time, perhaps we will come to take this personalisation for granted, always expecting ads to be targeted to us based on what we want right now.
    What it comes down to is that brands need to build trust by being transparent about how they collect data. If consumers are unsure of how their data was collected and used, they are likely to reject the personalised content.

    The original version of this article was published on The Conversation.
