24-Hour Social Media Use and Data Collection

6:40 a.m. Instagram

I woke up and sent a direct message over Instagram, then scrolled through suggested reels while lying in bed. Instagram’s login credentials include my name and email address. These identifiers are unique and persistent and can be used to link my online activity and third-party data back to me.

Instagram also collects data on which posts and reels I stop scrolling to watch and interact with, which ads I click on, and what I search for, and it suggests posts and reels based on this information. In addition, it knows what kind of device I’m connecting from and my location, based on the location tags I use and my searches for places such as Lake Tahoe and Sacramento.

6:50 a.m. Facebook Messenger

I answered a call from a friend on Facebook Messenger that lasted approximately one hour. While I don’t have the Facebook app downloaded on my phone, my Messenger account is connected to my Facebook account. Messenger uses the same login credentials and unique identifiers as Facebook and has access to the same information used to set up my profile, including my email address, name, date of birth, and location. It also collects data on my calls and messages, MAC address, IMSI and IMEI numbers, advertising ID, and IP address, and it has permission to use my microphone and camera as well as the photos and files stored on my device.

12:32 p.m. Garmin Watch

At 12:32 my Garmin watch sent me a reminder to relax because my stress level was high. Garmin measures stress as low, medium, or high based on the heart rate variability data it collects. As the largest companies in the technology sector continue to collect data on over 80% of measured web traffic, this article details How Garmin survived the GPS revolution despite Apple and Google. While Garmin Ltd. is a publicly traded company that, unlike its competitor Fitbit (acquired by Google in 2019), has not been acquired, Google and countless other first- and third-party trackers may be accessing the health data recorded by the fitness watch and stored on the Garmin Connect website.

This data, including step count, sleep quality, heart rate variability, connections (contacts), and many other health and wellness insights, can potentially be collected and stored by third-party tracking cookies whether the website is accessed through a web browser or the Garmin Connect app is in use.

4:00 p.m. JustWatch and Instagram

JustWatch is a streaming guide for movies and TV shows that I often use to find out where I can watch movies online for free. I sent this link, Terrifier 2 streaming: where to watch movie online? (justwatch.com), over Instagram Messenger so that my friend and I could stream a movie together after work. I noticed that JustWatch automatically selects “do not sell my personal information,” which can be toggled with a slider.

However, when a link is sent through Facebook or Instagram Messenger, the link is opened and downloaded by servers and potentially stored and sold. This is different from JustWatch selling information that is freely collected and stored by external servers, according to this Forbes article on why you should stop sending links over Messenger. Not only can the collected data be linked directly to me as a unique user, but data about the movie genres I’m interested in and the websites I frequent may also be collected and stored.

8:40 p.m. JustWatch and Instagram

I watched the movie with my friend while on an Instagram call. Instagram can potentially access the content of calls as well as their metadata, including duration, time, and the user profiles involved.

Reflection

It’s clear that much of the data collected, stored, bought, and sold, and the inferences trackers make about me, are largely outside anyone’s knowledge or control. I don’t access a lot of media in any given day, but the websites I visit, the social media apps I use, and my smartwatch all collect and have access to data about me, from my interests and contacts to my health data and the unique identifiers directly linking this information to me as a unique person. All of this accumulated data can be combined, however briefly, to create a profile of me complete with inferences. This seems to be largely unavoidable even if I “went analog” tomorrow: the information and inferences could still be linked directly to me through unique identifiers if I logged onto another device at any point in the future.

I think this is unsettling mostly not because of targeted advertising, but because the use and misuse of large and growing data profiles about unique users is largely out of anyone’s control, and data security breaches can lead to a loss of privacy.

24-Hour Social Media Use

I chose to document my social media use on a Sunday for this assignment. My social media use is minimal and mostly limited to the same couple of platforms that I use to keep in contact. On the weekend I don’t check emails.

8:00 a.m. Instagram

I woke up and reached for my phone charging next to the bed. I opened Instagram and sent a direct message. Then I closed the app and went off to run a quick errand.

8:20 a.m. Instagram

After getting home from the errand, I made a phone call over Instagram in my truck that lasted approximately 25 minutes.

12:30 p.m. Instagram

I sent a direct message to a friend and then we spoke over a video call which lasted approximately 1.5 hours.

3:00 p.m. Instagram

I woke up from a nap and saw that I had several new message notifications. I responded to the reels that were sent and scrolled through my Instagram feed to find more reels to send. I often find entertainment in scrolling through reels and sharing them, although I try to refrain from doing it in the morning.

[Embedded Instagram post shared by Memes by Myles (@mememanmyles)]

4:00 p.m. Facebook Messenger

I sent several messages over Facebook Messenger because I had unanswered notifications. I don’t use Messenger as often as Instagram to send messages.

8:00 p.m. Instagram, X, and YouTube

I made a phone call on Instagram and opened X to look for movie suggestions to watch on the video call. My long-distance girlfriend and I enjoy watching movies together. Making phone and video calls over Instagram allows us to speak without paying for long-distance calls, and we often stay on the phone just to watch movies together. We selected the movie Terrifier because I saw that the third installment is being released in October, and I rented it on YouTube.

11:00 p.m. Instagram

I was still on the call, and I plugged my phone in and went to sleep.

How Social Media Algorithms Are Optimized for Engagement: Misinformation and Societal Implications

Algorithms on social media monitor and collect user data and recommend relevant content. These algorithms determine the relevance of content (i.e., posts, people, and groups) by optimizing for engagement, according to this journal article on Social Drivers and Algorithmic Mechanisms. In this blog post I will explain how algorithms optimize for engagement, discuss the societal implications of algorithmic optimization, and make suggestions about how algorithms can be better engineered to respond to those implications.

One implication often discussed is that social media algorithms are highly proprietary and their mechanisms lack transparency, according to this article on What is Social Media Algorithm? Engagement can be measured by the number of clicks, likes, and comments, and by the overall time spent on social media platforms, as defined in the article from SAGE Publications Inc about Social Drivers and Algorithmic Mechanisms. Because these algorithms are proprietary and opaque, the echo chambers and filter bubbles they help create can contribute to social and political polarization and segmentation. Below is a graph showing a rise in the number of articles containing these key terms.

(PDF) Echo Chambers and Filter Bubbles of Fake News in Social Media. Man-made or produced by algorithms? (researchgate.net)

This article on Digital Media Literacy defines an echo chamber as “an environment where a person only encounters information or opinions that reflect and reinforce their own.” According to the article Social Drivers and Algorithmic Mechanisms, algorithms constantly adapt to user engagement and are regularly updated on social media platforms. Facebook’s algorithm initially optimized content for engagement based on metrics like clicks, likes, and comments. It wasn’t long before malicious actors, fake accounts, and spammers figured out how to use these metrics to their advantage, so Facebook updated the algorithm to “prioritize social interactions” by giving more weight to content posted by friends and family. In 2015 it added emotional reaction buttons, and posts that gained reactions were also given more weight.

Engaging content is not always quality content; in fact, the same article on Algorithmic Mechanisms states that the more engaging a piece of content was, the more it lacked in quality. When Facebook initially added emotional reaction buttons, the angry reaction was weighted five times more than the like reaction, favoring lower-quality content that elicited strong emotional responses and was more likely to be partisan or contain falsehoods. In response to concerns, Facebook reduced the weight of the angry reaction to zero in 2020.
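
To make the weighting idea more concrete, here is a minimal sketch of how an engagement-optimized ranking score might be computed. Aside from the reported five-to-one angry-to-like ratio, the weights, boosts, and field names are hypothetical illustrations, not Facebook’s actual values.

```python
from dataclasses import dataclass

# Hypothetical reaction weights for an engagement-optimized ranking score.
# Only the 5x angry-to-like ratio reflects the reported value; the rest are invented.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 1.0,
    "angry": 5.0,    # reportedly reduced to 0 in 2020
    "comment": 2.0,
    "share": 3.0,
}

@dataclass
class Post:
    author_is_friend: bool
    reactions: dict  # e.g. {"like": 120, "angry": 40}

def engagement_score(post: Post) -> float:
    """Sum weighted interactions; boost posts from friends and family."""
    score = sum(REACTION_WEIGHTS.get(kind, 0.0) * count
                for kind, count in post.reactions.items())
    if post.author_is_friend:
        score *= 1.5  # hypothetical "meaningful social interactions" boost
    return score

feed = [
    Post(author_is_friend=False, reactions={"like": 200}),
    Post(author_is_friend=False, reactions={"angry": 60, "comment": 10}),
    Post(author_is_friend=True, reactions={"like": 50, "comment": 5}),
]
# Content with strong emotional reactions rises to the top of the ranked feed.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 1), post.reactions)
```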

While Facebook and other social media platforms respond to concerns about low-quality content by updating algorithmic metrics, this article from ResearchGate about Echo Chambers and Filter Bubbles of Fake News in Social Media asks, “Are echo chambers and filter bubbles of deceptions and fake news man-made or produced by algorithms?” The answer is both. This article from Reuters about The Truth Behind Filter Bubbles defines them as “a state of intellectual or ideological isolation that may result from algorithms feeding us information we agree with, based on our past behavior and search history.”

In essence, filter bubbles are the product of engineered algorithmic feedback loops, and because these algorithms are complex systems, their creation and perpetuation is to some degree beyond engineers’ control. The vast scale of online social networks compared to offline ones, the proprietary design of the algorithms, and the personalization of the content shown to each user all compound the problem, creating more opportunities for exposure to misinformation, low-quality content, and falsehoods. An article titled It’s time to stop trusting Facebook to engineer our social world warns that falsehoods tend to spread more quickly across vast social networks because they generate more engagement.

The same article uses the term “business internet” to describe how the business models of social media platforms optimize for engagement, which means they automatically optimize for the spread of falsehoods and other low-quality content.

To curb the spread of falsehoods and misinformation, which can contribute to political and social polarization, Meta, which owns Facebook and Instagram, monitors for this type of content using third-party fact-checkers. Posts that contain misleading information are marked by fact-checkers and given a lower weight. Facebook’s website states that posts are removed only if they explicitly violate community standards. One possible implication of this is that creating and spreading false content can be done in less time than it takes to fact-check. Additionally, Meta prioritizes giving flagged content less weight instead of removing it altogether. It has been speculated that this is because engagement turns a profit, and removing this kind of content rather than downweighting it goes against the company’s own business model.

The article Social Drivers and Algorithmic Mechanisms suggests that algorithms can address these implications and be improved by implementing different optimization metrics, or by taking additional measures to intervene when fact-checkers detect false content and disinformation, without needing to remove the content or undermine the platforms’ business models.

One suggestion is to implement a metric that optimizes for content that is less partisan and polarizing and to give more weight to trustworthy, moderate news sources and affiliated groups. Second, adding a delay before a post flagged by fact-checkers can be viewed or shared, in addition to giving it a lower weight, could also help curb the spread of low-quality information, which otherwise can reach many individual users nearly instantaneously because of weighted metrics and the likelihood it will be engaged with.
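
As a rough illustration of these two suggestions, the sketch below blends a hypothetical source-trust signal into the ranking, downweights fact-checked posts, and delays resharing until a cooling-off period has passed. The penalty factor, delay window, and field names are all assumptions made for the example, not any platform’s real values.

```python
import time
from dataclasses import dataclass
from typing import Optional

FLAG_PENALTY = 0.2                 # hypothetical: a flagged post keeps 20% of its score
SHARE_DELAY_SECONDS = 6 * 60 * 60  # hypothetical six-hour cooling-off period

@dataclass
class Post:
    base_score: float                    # engagement-based ranking score
    source_trust: float                  # 0.0-1.0, higher for trustworthy, moderate sources
    flagged_at: Optional[float] = None   # timestamp set when fact-checkers flag the post

def ranking_score(post: Post) -> float:
    """Blend engagement with source trust, then penalize flagged posts."""
    score = post.base_score * (0.5 + 0.5 * post.source_trust)
    if post.flagged_at is not None:
        score *= FLAG_PENALTY
    return score

def can_share(post: Post, now: Optional[float] = None) -> bool:
    """Block resharing until the cooling-off period after a flag has passed."""
    if post.flagged_at is None:
        return True
    now = time.time() if now is None else now
    return now - post.flagged_at >= SHARE_DELAY_SECONDS

flagged = Post(base_score=100.0, source_trust=0.3, flagged_at=time.time())
print(ranking_score(flagged), can_share(flagged))  # downweighted, and not yet shareable
```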

Assessing Current Platforms’ Attempts to Curb Misinformation


In 2012 Facebook acquired the photo-sharing app Instagram for $1 billion. According to an article on Medium, Facebook co-founder and CEO Mark Zuckerberg offered Instagram’s founder Kevin Systrom double what Twitter CEO Ev Williams was offering. Zuckerberg also assured Systrom that the app would not be integrated into Facebook and would be run like a separate company.

In 2021 Facebook changed its name to Meta, although the social networking app would continue to be called Facebook. Meta owns Facebook and Instagram, although they are run separately. This blog post will look closely at what these two platforms are individually doing to decrease or eliminate the spread of misinformation.

Facebook


The efforts Meta is taking on Facebook to combat the spread of misinformation, as stated on its website, include removing content that violates community standards and monitoring for false content. Posts that contain misleading information are marked by fact-checkers; if they don’t explicitly violate Facebook’s community standards, the distribution of those stories is reduced.

Examples of posts containing misinformation that would be removed for violating community standards include posts that are likely to directly contribute to the risk of imminent physical harm or interfere with the functioning of political processes.

According to this article, author Nick Couldry says that it is Facebook’s business model to circulate posts that optimize for both engagement and profit. Couldry also states that falsehoods generate more engagement, which means the goals of disinformation operators closely align with Facebook’s business model.

The community standards are publicly available on Facebook’s website. Facebook works with third-party fact-checkers to mark posts as false or misleading; if a post does not explicitly violate the standards, its distribution is reduced. Fact-checkers can rate posts by selecting from options including false, partly false, missing context, or altered. Notifications are then sent to anyone who tries to share a post that has been rated by a fact-checker.
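
A minimal sketch of how such ratings might translate into reduced distribution and a sharing notification is shown below; the penalty values and function names are hypothetical and are not Meta’s actual implementation.

```python
from typing import Optional

# Hypothetical mapping from a fact-checker rating to a distribution penalty.
RATING_PENALTY = {
    "false": 0.1,
    "altered": 0.1,
    "partly false": 0.5,
    "missing context": 0.7,
}

def adjusted_reach(base_reach: int, rating: Optional[str]) -> int:
    """Reduce a post's projected reach according to its fact-check rating."""
    if rating is None:
        return base_reach
    return int(base_reach * RATING_PENALTY.get(rating, 1.0))

def share_notice(rating: Optional[str]) -> Optional[str]:
    """The notification shown to anyone who tries to share a rated post."""
    if rating is None:
        return None
    return f"Independent fact-checkers rated this post as {rating}."

print(adjusted_reach(10_000, "partly false"))  # 5000
print(share_notice("partly false"))
```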

Fact-checking is Facebook’s central way to combat the spread of misinformation on the platform. It is of some concern whether the third-party fact-checkers have enough oversight or accountability to be effective, according to this article in The BMJ. And of course, there is room for human error. Another article on the possible downside of fact-checking on Facebook brings a further concern to the forefront: creating and spreading false content can be done in less time than it takes to fact-check. Even when a post has been publicly marked as containing false or missing information, if it doesn’t violate the community standards it will only be shown lower in the feed.

Given the propensity for human error in identifying fake information on the internet, marking a post neither prevents it from being shared nor guarantees that the marked post actually contains false information. These efforts could be improved by tightening the community standards or by removing all posts containing false or missing information, even at the cost of profit and engagement.

Instagram


Instagram only started using fact-checkers in May 2019. This may be because Facebook has over 2 billion daily active users while Instagram has only 500 million, up from 25 million when it was acquired in 2012, according to an article on Medium.

In addition to fact-checking, since it is a photo-sharing app, Instagram marks matching content using image-matching technology in an effort to combat the spread of misinformation. Not only is marked content ranked lower, but Instagram also takes the additional step of removing it from the Explore and hashtag pages.
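
Image matching of this kind is often implemented with perceptual hashing, where visually similar images produce similar fingerprints. Below is a minimal sketch of an average-hash comparison, assuming the Pillow package is installed; Instagram has not disclosed its actual technique, so this only illustrates the general idea, and the file names are hypothetical.

```python
from PIL import Image  # requires the Pillow package

def average_hash(path: str, size: int = 8) -> int:
    """Build a 64-bit fingerprint: shrink, grayscale, then threshold on the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_flagged_image(candidate: str, flagged: str, threshold: int = 5) -> bool:
    """Treat near-identical fingerprints as copies of already fact-checked content."""
    return hamming_distance(average_hash(candidate), average_hash(flagged)) <= threshold

# Usage (hypothetical file names): compare a new upload against an image
# that fact-checkers have already rated as false.
# print(matches_flagged_image("new_upload.jpg", "flagged_original.jpg"))
```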

Once fact-checking was implemented, content across both platforms (Facebook and Instagram) could be marked automatically to reduce its distribution. For example, if disinformation operators use both platforms to distribute content, all posts across both platforms would be marked and either removed or appear lower across all feeds.

Instagram could be more transparent about the image-matching technology it uses, as disclosed on its website. Potential issues around the ethics of image matching and facial recognition include racial bias, lack of informed consent, and mass surveillance. The platform has also implemented a false-information feedback option, where users can report posts from their own feeds to have them reviewed by fact-checkers. While this may help locate such posts more quickly, it does not necessarily mean that all reported posts contain false information. It is possible, then, that other posts that actually contain false or misleading information are being overlooked by fact-checkers who are instead rating posts reported by users.

Claim Analysis

https://www.instagram.com/p/C36FQONNyDg/?utm_source=ig_web_copy_link

I will be analyzing the claim made by this sponsored post on Instagram. Using the SIFT method, I hope to verify the claim as true or false and explain why it is important to analyze potentially false or misleading information.

Stop

The first step in the SIFT method for verifying information is to stop and think about whether the information sounds too good, bad, or weird to be true, or uses emotionally loaded language. Claims that sound too good to be true most likely are, and persuasive-language techniques are used to influence readers. In the image below I highlighted the loaded language and terms I noticed being used. The quotation marks around certain words indicate criticism or skepticism and convey a negative attitude toward what other people supposedly have to do in order to be successful as Airbnb business owners. Other loaded terms like “corporate slaves” and “tons of debt (or credit)” carry negative connotations. Not to mention, the claim that you can make an exorbitant amount of money as an Airbnb business owner without purchasing rental property sounds too good to be true.

Investigate the Source

The second step in the SIFT method is to investigate the source. If the source is not familiar, investigating it will often provide the information needed to verify the truthfulness of a claim. The source of this post is an Instagram profile, strwealthacademy, which has 25.5K followers and 302 posts. The account was made in May 2016 and has been active for almost eight years, which does make the source seem potentially credible. The claim in their bio, “We help 9-5’ers replace their income using Airbnb w/o owning property,” does not sound too good to be true and states the purpose of the content, with a link to a FREE Masterclass.

Find Better Coverage

The third step in the SIFT method is to see how many other sources are covering the same thing, or in this case, offering the same content with the same claims. If other sources are covering or offering the same content, then the claim is more likely to be verifiable. The course being offered is not one offered by many different sources. The use of loaded language and claims that seem too good to be true are what make this course look like a get-rich-quick scheme. There is another suggested profile to follow, biggerpockets, which also offers freebies and information on real estate investment. That profile has more posts and followers and has been on Instagram longer. There are no claims in its bio that sound too good to be true or misleading, other than the offer of a free membership. This is the only account with similar content suggested to follow.

Trace Claims, Quotes, and Media to the Original Context

The final step in the SIFT method is to trace the claim to its original context or source. The link to the masterclass offered by strwealthacademy took me to the landing page in the image below. Something about the landing page that seems untrustworthy is that there is no menu bar with an option to navigate the site or learn more about the course or its founders. I deleted the end of the URL to see if https://strwealthacademy.com/ would take me from the landing page to the site’s main page, which it did. However, the main page is not laid out well, and there is no information about the course or its founders, only the same unvaried promise of making over $1,000,000 as an Airbnb business owner without owning any Airbnbs.

After analyzing this claim using the SIFT method, I was unable to verify it. With the only option being to sign up for the free masterclass to get more information on what is being offered, there is no way to verify its validity. Having an Instagram account with posts and followers does not make the source trustworthy, and the original source I traced the account back to, the website, leads nowhere.

24-Hour Media Diet: Spotting Misinformation

6:34 a.m.: Woke up and checked direct messages on Instagram. Then I scrolled through my Instagram feed and saw several advertisements for economic and credit card debt relief.

The sponsored post was created by an account called Consumer Advocate Today. When I went to look at the account profile, it was no longer available. It didn’t surprise me that Instagram removed it, because the post was worded like attention-grabbing clickbait. The headline was misleading and fabricated a sense of urgency, claiming that you can get rid of $27,456 in credit card debt if you apply today before “it” ends. The post makes an implausible claim and is missing specific details.

https://www.instagram.com/p/C3_SfVbNeu1/?utm_source=ig_web_copy_link

8:20 a.m.: Scrolled through my Instagram feed again and found another sponsored post for debt relief. The image is a screenshot from the video and uses sensational language, imagery, and exaggerated claims. In the video, someone pulls hundreds of dollars out of an ATM, followed by a reaction clip of someone feigning surprise. The account HappeningNow USA shares the same profile picture as the profile u.happeningnow, which commented, “I’m still shocked. Best decision my wife and I have ever made.” That profile has also been removed since this morning.

https://www.instagram.com/p/C4BSun2Nizk/?utm_source=ig_web_copy_link

11:27 a.m.: Checked my Gmail and saw that I had also received an email promising credit card debt relief. Again, the email used sensationalized language, was vague, and was missing important information. The link to get a free estimate in the body of the email, with no additional information, makes the message come across as misinformation.

The email address did not have the domain of a national debt relief program and seemed like a fraudulent email intended to look like a legitimate debt relief program from a trusted source.

2:47 p.m.: Called a phone number this afternoon when I was trying to change the passenger information on my upcoming flight. The phone number I called was 1-800-684-8331, almost identical to the United Airlines phone number. The voice recording would not give me any menu options other than to hear more about a special offer. I listened to the recording a couple of times and, growing frustrated, hung up. I went back and viewed another email from United and saw that their actual phone number is 1-800-864-8331.

This false contact information was purposefully disguised as credible, and likely fabricated to steal credit card and personal identification information from travelers with upcoming flights.

4:01 p.m.: Scrolled through my Instagram feed again and found ANOTHER sponsored post for credit card debt relief. Shocker. The claims were not credible; they were sensationalized and exaggerated. There is not enough information to back them up, and the account California Debt Relief has zero posts, only a link to its website.

Their website contained more of the same content, and after a quick Google search, I found that www.californiadebtrelief.org has a low trust score of 58.8/100. Although the site is associated with a legitimate agency, this report rates it as having an active medium-risk score.

californiadebtrelief.org Reviews: Is this site a scam or legit? – Scam Detector (scam-detector.com)

https://www.instagram.com/p/Cplo1gLjBz9/?utm_source=ig_web_copy_link

I suspect that third-party data collected during my Google search yesterday on personal loans and credit cards is why I received both sponsored posts and emails about debt relief programs. This is more questionable content than I usually see in a day, and I did not expect to see so much given my limited online media usage. I do not click on anything that shares similarities with the posts and email mentioned in this blog post, because it immediately seems misinformative and untrustworthy.