Technology, Privacy & Security III

Technology continues to advance at a remarkable pace. It’s estimated that every twelve to eighteen months computers double in power, and so do the information technologies that use them.

That’s a good thing, right? No? Some people think that information technology is leading us towards a “dystopia” — a terrible, scary world.

Police around the world are using Facial Recognition technology to find criminals. This technology allows them to find out where a person is, where a person has been, and with whom a person has been meeting. They hope not only to catch criminals after they commit a crime, but to stop them before the crime happens. Many people say that this is okay because the police are only watching bad people, and good people have nothing to worry about. Is this true?

It’s not only police that are using technology to follow us and carefully study our behavior, of course. In fact, Facial Recognition is just the “tip of the iceberg.” The issue is much, much bigger than that.

Social media is a big part of our lives now. As we saw with Brexit, it is being used in dishonest ways to influence how people vote. Let’s read this article from the VOA website about the recent election in India: In India, Fake News Floods Social Media as Voting Begins

How is this happening? Why do social media companies allow it? Could it be that the companies themselves don’t know how to control their own technology? Watch this TED video and think about these questions.

Watch the following TED talk with whichever subtitles work best for you. Then answer the questions below:

What is her main argument? What evidence does she give? What does she mean by, “…we are the product that’s being sold”? What solutions does she suggest? How convincing is her argument? Does her argument change how you feel about media like Facebook or YouTube? Why or why not? Leave your reply below.

12 thoughts on “Technology, Privacy & Security III”

  1. Today the internet is overflowing with information and feeds us many kinds of content. Almost all of it is free, but free content carries advertising, so companies can collect data about our personal preferences. On top of that, they gather huge amounts of data about the public, build up large customer databases, and use them to make optimized advertisements. The public is under the impression that internet content is all free, but companies suck up customer data and use it for marketing. Because companies hold so much data about the public, they can easily abuse it, and people should realize that their personal information is being taken.

  2. She argues that SNS collect and store our information and can quietly monitor and shape us, and that this can lead to serious problems.
    To support her claims, she presented evidence of such problems from SNS such as Facebook and YouTube and from ProPublica’s research. YouTube, for example, automatically plays more videos from the same category once you watch one, so you end up seeing many of them. In other words, YouTube’s algorithms analyze us and store what they learn. The research on voting advertisements also shows that even a small manipulation of the picture shown can push more people to vote.

    “We are the product that is being sold” means that we adjust how we present ourselves through SNS and hide our inner selves, yet we are still being used by these companies without knowing it.
    So what we have to face from now on is the structural challenge posed by the lack of transparency of proprietary algorithms and the opacity of machine learning, and how to respond to all this indiscriminate gathering of information.

    Her argument is based on things we can feel in real life, so I felt both sympathy and alarm while listening. But everyone already knows this, and a more concrete alternative would have made the lecture more persuasive. As someone who uses SNS, I was certainly put on alert by her arguments and evidence. Still, she doesn’t give a specific alternative for how we should use SNS, so I’m not sure it will bring a big change to my own use of it.

  3. Her main argument is about the threat of AI and the influence its abilities can have. Right now advertisements are simply tailored to our tastes, but this technology will develop, and technology companies will be able to control us on many topics.
    Facebook, Google, Amazon, Alibaba and so on are free.
    That means we ourselves are what is being bought: our information.
    We cannot solve this easily, because solving this problem would mean changing everything from its foundations.

    Her argument is strong, because her talk has many concrete examples.
    It does change how I feel, because her story is frightening.

  4. The main argument Zeynep made was that AI algorithms operate in ways that control us without letting us know who is in control. Perhaps people in power secretly control us and lead us toward the kind of people they want us to be. She supported this argument with evidence that companies like Google and Amazon already analyze our shopping preferences, our thoughts, and the words we type into search engines, and decide what categories we fall into so they can steer our next purchase. The experiment Facebook ran on voting in the US was further evidence that AI works on people’s political behavior.
    As for social media like Facebook and YouTube, their algorithms can be used to influence almost anything: political decisions, thought, lifestyle, etc…
    She mentioned that “…we are the product that’s being sold,” implying that our information is categorized and analyzed to be used for marketing.
    I try not to share much information on social media. I only use Facebook to connect with friends around the world. I would say social media has made it easy to build relationships with others, and I appreciate that. However, I still give tons of personal information to people I don’t know through chats or pictures. I am scared of it now.

  5. When we use social media, those sites get to know about us, even the things we typed and then erased. There are algorithms, and they affect our thoughts and our lives. Algorithms can change our minds and learn about our political thinking. But these algorithms know neither the value of our lives nor the difference between politics and anything else, so they simply push more ads and other content at us on social media. Therefore we have to look at them with open eyes. This reminded me how many unbelievable posts there are on social media such as Facebook, Instagram and others.

  6. What is her main argument?
    The power artificial intelligence has is that people’s behavior and thinking can be controlled through machine learning.

    What evidence does she give?
    No matter how strong our will is, such as deciding “I don’t need to buy that,” artificial intelligence may change it. In a supermarket, for example, the snacks children love are placed next to the cash register, and we can see that trick.
    But we can’t see how machine learning controls us in far more subtle ways.

    What does she mean by, “…we are the product that’s being sold”?
    We can’t see how machine learning controls us in such subtle ways, so we end up doing just what the artificial intelligence suggests.

    What solutions does she suggest?
    We have to face and try to deal with the lack of transparency created by proprietary algorithms, the structural challenge of machine learning’s opacity, and all this indiscriminate data that is being collected about us.

    How convincing is her argument?
    We need to restructure the whole way our digital technology operates.
    We have to mobilize our technology, our creativity and our politics so that we can build artificial intelligence that supports us in our human goals but is also constrained by our human values.

    Does her argument change how you feel about media like Facebook or YouTube? Why or why not?
    I usually use YouTube, which also relies on machine learning. I sometimes spend a lot of time watching the next recommended videos.
    It convinced me that I am a slave of the computer.
    I think machine learning is a good technology, but it can move many people around like pieces in a board game.
    I will keep using YouTube, but I won’t forget that I am being controlled by artificial intelligence.

  7. The ads we casually see on the web and in apps are shaped by algorithms that are invisible to us. The algorithm is not a person and has no emotions, and if a government seizes it, it may manipulate us without our noticing. We must be able to control this opaque technology.
    When you watch a video of a certain category on YouTube, a video related to that category appears as the “next video.” Also, in an experiment that Facebook ran during an election, hundreds of thousands more people went to vote after seeing a certain post.
    We have to deliberately control this algorithm that we cannot perceive. This should be tackled by individuals, companies and governments.
    I think her argument is very persuasive.
    Yes, it does change how I feel. I had already suspected there might be something like this, so her points caught my attention, and after listening to her talk I understood it better than before.

  8. What is her main argument?
    What we need to fear most is that people in power will use artificial intelligence to control us. Many technologies gather our information.

    What evidence does she give?
    In the digital world, learning algorithms can learn to understand the characteristics of people and then apply what they learn to new people. These complex algorithms really work.

    What does she mean by, “…we are the product that’s being sold”?
    It means that our information is sold to the highest-bidding authoritarian or demagogue.

    What solutions does she suggest?
    We need to restructure the whole way our digital technology operates, everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system.

    How convincing is her argument?
    It is strong, because she gave concrete examples to show us what might happen.

    Does her argument change how you feel about media like Facebook or YouTube? Why or why not?
    Yes. I think I’ll be a little hesitant to use them. It is scary that my information is being analyzed from my history.

  9. ・What is her main argument?
    People are affected by AI systems and algorithms. We may even be building a dystopia by using AI.
    ・What evidence does she give?
    The evidence she gave was online advertising and recommendations. When we watch a video on YouTube, autoplay starts: one video, two videos, three videos, and on and on, we can always get a next video we have never seen. Characteristically, each video is related to our favorite things. For example, if I liked baseball highlight videos, I would be shown many more of them.
    ・What does she mean by, “…we are the product that’s being sold”?
    If we study AI systems more, we can understand how they move around us. Their structure organizes our actions, what we do and what we don’t, and we use these systems for free.
    ・What solutions does she suggest?
    We don’t need to get rid of AI systems. Rather, we should study more about how they work and what effects they have, and think seriously about the possibilities of AI.
    ・How convincing is her argument?
    I think she makes a good point. Of course, we don’t really understand the AI systems, because the world is always changing quickly and people can’t keep up with the changes.
    ・Does her argument change how you feel about media like Facebook or YouTube? Why or why not?
    No. These systems are very useful, and it isn’t easy to change the world because so many people already use them.

  10. The algorithm controls and manipulates our lives. It does not just target certain groups but individuals.
    In 2016, she wanted to write something about Trump, so she watched one of Trump’s speeches. Then YouTube started to recommend other videos about white supremacy, and the playlist from YouTube became more and more extreme.
    She said “we are the product that’s being sold” means that when we see or search for something, the algorithms analyze it and recommend more of whatever we seem to be looking for. The algorithms can also use our search records to target ads and raise viewing rates.
    The best way to escape this technology would be to destroy it and build a new industrial structure from scratch.
    For example, in 2012 Facebook pushed a message to users, and just by adding pictures of your friends who had voted, it increased the number of voters.
    I really like using YouTube; I feel I need to watch it every day. Undeniably, I always feel a little scared of the YouTube algorithms, but I can’t stop either. I can feel that the internet is controlling my life and that I can’t live without it. But the talk was also a warning to me about the internet, and from now on I will think before I do anything on it.

  11.  She argued that, in order to prevent the vast amount of personal information collected through machine learning algorithms from being used by those in power to monitor, judge, manipulate, and censor people, there is a need for a structural challenge against the indiscriminate and opaque collection of information by machine learning algorithms.

     As a first example, she said Facebook has an unimaginable amount of data about an individual’s identity: status updates, Messenger conversations, every place you logged in from, every photo you uploaded, and even deleted content are all stored as data. In addition, such companies buy a lot of personal information from data brokers, ranging from financial information to search records.
     As a second example, YouTube and Facebook store data about the videos people play, the posts they update, and the pages they follow, and they let trained algorithms prioritize and arrange posts that reflect people’s long-standing tastes. This type of algorithm is already affecting us a great deal, and it can affect people’s political behavior as well. For example, in the 2010 midterm election, a message that Facebook showed just once, saying that today was election day, led to an additional 340,000 people voting.

     So it is very easy for the algorithm to infer a person’s religion, political orientation, personal characteristics, intelligence, happiness, drug use, parents’ divorce, gender and sexual orientation. She said that if those in power use these algorithms to quietly monitor, judge and manipulate people, especially if they identify and exploit individual vulnerabilities and censor people, no one knows what the result could be. We may not even be aware that we are living inside this dictatorial surveillance system. In a structure that organizes, controls and analyzes people through the indiscriminate collection of overly detailed and vast personal information, we are no different from objects traded freely by an invisible power elite.

     She said that what we should do in the face of this situation is mount a structural challenge to the opaque nature of machine learning and counter all this indiscriminate gathering of information. She also said that intelligent technology should be a tool that supports human beings for human purposes but should not go beyond human values, so we need to take a deep look at how the systems we depend on are doing their job.

     She gave many detailed examples of how companies use artificial intelligence, and explained the concepts behind algorithmic systems, to show how the analysis, inference and learning that algorithms perform on people affect our personal safety and minds, and how this development of science and technology could ultimately lead to a society monitored by an invisible power elite. However, I think more fact-based examples are needed for the claim that the vast and detailed personal information collected through algorithms will necessarily be abused by a few entrepreneurs and powerful people. I also thought she needed to talk about a more specific alternative for fighting this reckless gathering of information.

     I came to deeply realize the harmful effects of algorithmic systems and the danger of abuse, when before I had thought of them as simply convenient. However, these systems are already deeply embedded, and SNS is already used across the wide range of areas that make up our lives, so we cannot simply stop them or change them drastically. Instead, I think we need to look hard at the intended purposes of the systems we create and use, such as artificial intelligence.

  12. What is her main argument?
    The essence of the problem is the opaque structure and the business model, not the intentions of the engineers.

    What evidence does she give?
    She points out that we don’t completely understand how the algorithms work or how AI analyzes us,
    and also that we don’t see the immense danger of those in power watching and controlling us in secret.

    What solutions does she suggest?
    She suggests that the solution is to look carefully at how these systems we depend on actually work.

    How convincing is her argument?
    I was convinced by her argument. I’m sure that her viewpoint is very important.

    Does her argument change how you feel about media like Facebook or YouTube? Why or why not?
    Yes, it does. It changed how I think about media, because I believe large companies that affect the world, such as Google, are targeting each individual and manipulating us to bias our thinking.
