Introduction
In the 21st century, social media sites such as Facebook, Twitter, and Instagram have emerged as essential tools for families, coworkers, and friends to keep in touch and interact with one another. Individuals use the sites to share videos or photos, provide updates on important events to their customers, friends, or families, and coordinate meet-ups (Surette, 2016). However, social media sites also serve as tools for criminal and nefarious purposes (Surette, 2016). For instance, Newcomb (2017) recounts how Steve Stephens broadcast himself via Facebook video gunning down a person in cold blood. The video fueled discussions about the regulation of content broadcast over social media sites. This essay reviews Facebook's ethical and legal duty to rescue a crime victim.
Facebook’s Ethical or Legal Duties to Rescue Crime Victims
Facebook has no legal obligation to rescue victims of crime, but it arguably has an ethical duty to do so. Failure to act constitutes a crime only when the law imposes an affirmative duty to act, and individuals have no legal obligation to rescue others unless a special relationship exists between the person in danger and the potential rescuer. Facebook and other social media sites therefore have no legal obligation to rescue crime victims, because fighting crime is not their role; those who are legally obligated to rescue others typically undergo special training in emergency handling and law enforcement. However, when a social media site has the capacity to detect apparent crimes in real time, it can inform the relevant authorities rather than act as a bystander.
In the case of Steve Stephens, Facebook acted as a bystander: it witnessed a person commit a crime but failed to act. No law found Facebook culpable of committing a crime, yet by failing to report it fell short of its ethical obligations. In some cases, Facebook could prevent crimes before they happen. Facebook itself is not a person but a software platform that cannot be held responsible for crimes; its employees, however, could help prevent crimes if company policy allowed them to pull down videos before users report them. Facebook's policy only permits employees to review content for possible deletion after users report it, so employees remain unaware of crimes for as long as no other users file reports. Nevertheless, there are situations in which Facebook, as a social media proprietor, can be held legally or ethically responsible. For instance, in a kidnapping that unfolds over more than fifteen hours while the kidnappers stream the events live on Facebook, the company has a legal and moral obligation to report the matter to the relevant authorities (Justice Laws Website, 2018). Failure to report would make Facebook culpable of negligence.
Social Media Sites’ Pro-activeness in Content Reviews
Social media firms face increasing scrutiny over the user content they allow on their platforms, and many observers are concerned that the sites do not do enough to counter offensive, harmful, or false content (Brannon, 2019). One way social media sites could review content proactively is by appointing a content review board. The board would establish ethical guidelines directing the sites' managers and staff to act promptly on content that does not align with the company's values by pulling down posts, photos, or videos. The board should also set policies aimed at reducing or eliminating cyberbullying.
The second strategy for proactively addressing content posted on social media platforms is to improve the censoring technologies the sites use. Platforms such as Twitter, Facebook, and Instagram should develop artificial intelligence tools that screen users' posts before publication and block the sharing of content that advocates violence against any group. Reviewing content before it is posted would allow harmful material to be filtered out before other users ever see it, and a short delay on live broadcasts would create time to delete harmful content and deactivate the offending accounts. A minimal sketch of such a screening step appears below.
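As a rough, non-authoritative illustration of pre-publication screening, the Python sketch below holds back any post that a scorer flags or that belongs to a live broadcast. The keyword-based scorer, the 0.8 threshold, and the review queue are assumptions made for the sketch; a real system would rely on a trained classifier rather than a word list.

```python
# A minimal sketch of pre-publication screening; the flag terms, threshold,
# and queue handling are illustrative assumptions, not any platform's real API.
from dataclasses import dataclass
from typing import List

VIOLENCE_THRESHOLD = 0.8                      # assumed cutoff above which content is held
FLAG_TERMS = {"shooting", "attack", "kill"}   # crude stand-in for a trained model


@dataclass
class Post:
    user_id: str
    text: str
    is_live: bool = False


def score_violence(text: str) -> float:
    """Crude scorer standing in for a real classifier: 1.0 if a flag term appears."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 1.0 if words & FLAG_TERMS else 0.0


def screen_post(post: Post, review_queue: List[Post]) -> bool:
    """Return True to publish immediately; otherwise hold the post for review."""
    if post.is_live or score_violence(post.text) >= VIOLENCE_THRESHOLD:
        review_queue.append(post)  # live streams are briefly delayed; flagged posts await human review
        return False
    return True


if __name__ == "__main__":
    queue: List[Post] = []
    print(screen_post(Post("u1", "photos from the family picnic"), queue))            # True: published
    print(screen_post(Post("u2", "watch me attack him tonight"), queue))              # False: held
    print(screen_post(Post("u3", "starting a live broadcast", is_live=True), queue))  # False: delayed
```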
The third intervention would be for social media sites to develop technologies that prevent the re-sharing or re-posting of content that has already been filtered and pulled down. The sites could also deploy filters that delete posted violence, graphic images, sexual misconduct, and other improprieties before users view them, which would proactively curb cyberbullying. The basic mechanism is sketched below.
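The Python sketch below, again only a simplified illustration, shows the idea: content that has been taken down is fingerprinted, and later uploads with a matching fingerprint are rejected. The exact-match SHA-256 hash is an assumption made for brevity; production systems rely on perceptual hashing that also catches slightly altered copies.

```python
# Sketch of blocking re-uploads of already-removed content by fingerprint.
import hashlib

removed_fingerprints = set()  # fingerprints of content already taken down


def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint; a perceptual hash would also catch near-duplicates."""
    return hashlib.sha256(content).hexdigest()


def take_down(content: bytes) -> None:
    """Record removed content so identical uploads are blocked in the future."""
    removed_fingerprints.add(fingerprint(content))


def allow_upload(content: bytes) -> bool:
    """Reject uploads whose fingerprint matches previously removed content."""
    return fingerprint(content) not in removed_fingerprints


if __name__ == "__main__":
    clip = b"...bytes of a removed video..."
    take_down(clip)
    print(allow_upload(clip))            # False: the same clip cannot be re-shared
    print(allow_upload(b"a new video"))  # True
```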
Safeguards to Prevent Broadcasting Acts of Violence
Uncontrolled and unreflective use of social media's power can lead to infringement of personal rights, the spread of hate speech, symbolic violence, and psychological attacks carried out without the consent of the parties involved. The sites can also be abused through fake accounts that spread negative discourse intended to mob, abuse, harass, and insult, as well as to disseminate malevolent views and information (Mengü & Mengü, 2015). Facebook and other social networking sites can prevent the broadcast of acts of violence through extensive monitoring and by investing in data mining technology.
Social networking sites can prevent the sharing or posting of acts of violence through extensive monitoring. Implementing 24-hour monitoring programs on sites such as Facebook, Twitter, and Instagram would reduce the sharing of content that promotes violence; for instance, the sites can hire staff dedicated to monitoring the platforms day and night. In addition to round-the-clock monitoring, the sites should operate a hotline that individuals can call to report suspicious posts.
All social media sites should invest in data mining to prevent the broadcast of violence-promoting content. Data mining technology enables algorithms capable of detecting acts of violence, graphic images, and sexual misconduct, and it can help the sites track people's movements as locations are recorded on a daily basis (Singh, Kaverappa, & Joshi, 2018). However, the data mining approach must not violate people's right to privacy. The sites can also use algorithms to screen content before users post it on their platforms. To avoid violating the First Amendment, the sites should first subject these algorithms to public scrutiny, involving users in developing a benchmark. A toy illustration of such a detection algorithm follows.
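As a toy illustration of the kind of supervised detection such data mining implies, the Python sketch below trains a small text classifier on labelled examples and flags new posts whose predicted violence probability exceeds a threshold. The four-example training set, the scikit-learn pipeline, and the 0.5 cutoff are assumptions chosen only to make the idea concrete; a deployed system would need far more data and careful evaluation.

```python
# Toy sketch of a supervised classifier for flagging violent posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled sample: 1 = violent/threatening, 0 = benign (illustrative only).
texts = [
    "I am going to hurt him tonight",
    "watch me attack them on camera",
    "lovely dinner with the family",
    "sharing photos from our trip",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new posts and flag the ones the model considers likely violent.
new_posts = ["going live, about to attack someone", "new puppy photos"]
for post, prob in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    decision = "flag for review" if prob > 0.5 else "allow"
    print(f"{decision}: {post!r} (estimated violence probability {prob:.2f})")
```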
Facebook’s Oversight Committee or Ethical Officers
Facebook's corporate governance lacks an oversight committee and ethics officers. According to Neidig (2018), a Facebook investor urged stakeholders to vote for an oversight committee, arguing that supervision was needed after a wide range of controversies at the company. The investor noted that the firm's executives spend their time providing technical solutions to controversies about mental health, privacy, and illegal activities, yet Facebook will never resolve these issues without additional guidance and a strategic vision of the firm's role in society and of the risks it creates for itself, its users, and the community. Hiring a chief ethics officer would signal the firm's commitment to ethical responsibility. The chief ethics officer would serve as the firm's internal control point for ethics and improprieties, help executives think critically through difficult ethical decisions, and set ethical guidelines for the organization. Facebook should also form an oversight committee to enforce its community standards and ethical guidelines. The committee would investigate violations and complaints, provide constructive solutions through ethics training, and have the power to penalize individuals who break the rules, regulations, and community standards.
Changes to Promote Ethical Use of Facebook Platform
Facebook needs to invest in research into how the public uses its platform. The firm should integrate this research into its product development lifecycle so that ethical considerations are built in, and the ethics department should guide the research to ensure that operations align with ethical standards. Facebook should also incorporate its values into performance reviews to encourage employees to act ethically and to convey those values to clients and social media users.
Facebook should promote a culture of ethics among its employees and platform users. For instance, the organization should regularly hold moderated conversation groups that encourage staff and users to share content addressing compliance and ethics. This strategy would let ethics professionals comment and share their opinions while educating users, and the shared content would create a broad pool of information on ethics and invite comparison. Facebook should use its various platforms to raise awareness of the potential impacts of uncontrolled and unreflective social media use (Mengü & Mengü, 2015). Rather than merely policing content, the organization should promote social media literacy, highlighting measures that prevent and protect against offensive content and encourage appropriate, conscious use of social media.
References
Brannon, V. C. (2019). Free Speech and the Regulation of Social Media Content.
Justice Laws Website. (2018). Criminal Code. Retrieved from https://laws-lois.justice.gc.ca/eng/acts/C-46/section-219.html
Mengü, M., & Mengü, S. (2015). Violence and Social Media. Athens Journal of Mass Media and Communications, 1(3), 211-227.
Neidig, H. (2018). Investor calls for oversight committee at Facebook. Retrieved from https://thehill.com/policy/technology/383618-investor-calls-for-oversight-committee-at-facebook
Newcomb, A. (2017). Cleveland Shooting Highlights Facebook's Responsibility in Policing Depraved Videos. Retrieved from https://www.nbcnews.com/tech/social-media/cleveland-shooting-highlights-facebook-s-responsibility-policing-depraved-videos-n747306
Singh, N., Kaverappa, C. B., & Joshi, J. D. (2018, July). Data Mining for Prevention of Crimes. In International Conference on Human Interface and the Management of Information (pp. 705-717). Springer, Cham.
Surette, R. (2016). How social media is changing the way people commit crimes and police fight them.