Reading 14: Computer Science Education

After 7 semesters in the Computer Science department, it was interesting to spend some time reflecting on the classes and learning objectives that I’ve dedicated so much of that time to. While imperfect, I absolutely believe that the Notre Dame CS department has met its accreditation requirements and, beyond that, prepared me to succeed as a computer scientist.

It was interesting to read the ABET accreditation guidelines and compare them to the department and the coursework that I have participated in during my time at Notre Dame. For the most part, I think the CS curriculum requirements are solidly met by the Notre Dame Computer Science program – “coverage of the fundamentals of algorithms, data structures, concepts of programming languages… an exposure to a variety of programming languages and systems, proficiency in at least one higher-level language”, among other things. Perhaps the only area in which the curriculum is lacking is “coverage of… software design” – while software design is offered as an elective at Notre Dame, I don’t feel that any of the classes in the core CS curriculum adequately cover software design or development practices. Still, I feel that our curriculum is robust and, even compared to other accredited CS programs, provides students with a very broad and solid foundation of skills.

I was surprised by how broad the ABET learning goals are for accredited programs – I’m not sure how they could accurately quantify “an ability to communicate effectively with a range of audiences” or “an ability to analyze the local and global impact of computing on individuals, organizations, and society”. While the CS department did teach me the skills to do things like “apply knowledge of computing and mathematics appropriate to the program’s student outcomes and to the discipline” and “analyze a problem, and identify and define the computing requirements appropriate to its solution”, I feel like the “soft” skills required by the learning goals – such as the ability to work on a team, communicate effectively, “analyze the local and global impact of computing”, and prioritize engagement in continuing professional development – were developed much more by my experiences outside of the CS department, specifically in my Arts & Letters classes and ROTC classes. That being said, I feel pretty confident that my four years at Notre Dame have prepared me to meet all of these learning outcomes, and part of the robustness of the Notre Dame program is that it pushes students to acquire skills and techniques from classes/programs outside their major.

To me, these skills gained outside of CS classes are nearly as important to being a good computer scientist as the ability to code itself. This is why getting a college education is so important, and why, in my humble opinion, its outcomes far surpass those of attending a coding bootcamp. A college degree implies learning and understanding beyond just being able to type code – learning about the underpinnings and theory of algorithms and data structures, learning how to speak and write and communicate, and learning how to think critically. As the Triplebyte Blog article “Bootcamps vs. College” summarizes, “bootcamp grads match or beat college grads on practical skills, and lose on deep knowledge.”

Now, if a person simply wants to be good at writing code, or if a company is looking to hire a practical programmer who can churn out hundreds of lines of it, then a bootcamp is a great option. In fact, it might even be the better option in some situations, and the Triplebyte Blog notes that “it’s really incredible how quickly and how well the best bootcamp grads learn.” However, I think it is clear that bootcamp programs cannot replace a college degree. Nearly every skill described in the article “What every computer science major should know” is a skill that, I believe, only college CS majors are learning – how to communicate ideas to non-programmers, the Unix philosophy, how to teach yourself new programming languages, discrete math, data structures and algorithms, theory, architecture, etc. These topics require critical thinking and learning beyond what a bootcamp teaches, and they are fundamentally important to becoming a programmer who can understand all levels of a problem, come up with new and creative solutions, adapt to changing technology, and innovate and think outside the box.

So, do you need to go to college to become a good programmer? No. But do you need to go to college to be an ethical, innovative contributor in a field like software development or computer science? I think, on the whole, yes. While I don’t know everything that I could want to know, I do know that my Notre Dame education has provided me with a solid foundation of knowledge and, more importantly, the skills to continue to learn and contribute after graduation. I am grateful for the challenges and opportunities that this school and this department have presented to me, and I feel confident and ready to meet whatever comes next!

Reading 13: Piracy

This past week I had my closest-ever brush with the law – I was almost a pirate. A digital pirate that is. It was a cold South Bend night, I was curled up on my futon blissfully wasting away my Thanksgiving Break, and all I wanted to do was watch the new movie A Star Is Born – except I didn’t want to drive to a movie theater. I scoured the Internet for any legal way to watch it online, and finally in desperation texted my Internet whiz brother, who quickly responded with the names of a few underground pirated streaming services he uses. I typed the sites into my browser… paused… and finally, begrudgingly, sighed, closed my computer, and turned on Netflix instead.

Was it some ethical moral compass that stopped me from streaming a pirated movie? No. Was it the fear of a virus or some otherwise damaging malware? Also no. What stopped me was the fear of retribution – to some degree, from some big unknown corporation that tracks and prosecutes these things, but much more so from Notre Dame if they were to find out and kick me off the campus wifi.

Clearly, I don’t find piracy morally reprehensible on an individual level. For me, and perhaps many others, I think the largest reason for this is that the people or groups we steal from or infringe on by pirating feel very abstract. Will some movie production company crash and burn if I don’t pay to watch their movie, or will some big-shot music star be out on the streets if I don’t pay to download their music? Probably not. I wouldn’t illegally download or share music from no-name artists struggling to get by, so can’t a movie company cut this no-name college student struggling to get by a break too? I’m being snarky, I know, but my point is that piracy is not widespread enough, at least in my world, to warrant an expensive lawsuit or a revocation of Internet privileges.

Down to the meat of things. Most simply put, in the How-To Geek article “What Is the DMCA, and Why Does It Take Down Web Pages?”, “The Digital Millennium Copyright Act is a US law passed in 1998 in an attempt to modernize copyright law to deal with the Internet.” This legislation criminalizes copyright infringement online; in terms of piracy, the most relevant provisions of the DMCA are the anti-circumvention provision (criminalizing the circumvention of any sort of technological access control) and the safe-harbor provision (service providers are not liable for content infringing on a copyright on their service if they are not aware of it and take it down once they are alerted). These provisions do some important things, such as allowing platforms like YouTube to grow, since such platforms cannot be run into the ground with lawsuits over content they are unaware of; however, at the same time, the EFF article about the DMCA argues that “Congress ostensibly passed the ‘anti-circumvention’ provisions of the DMCA to discourage copyright ‘pirates’ from defeating DRM and other content access or copy restrictions on copyrighted works, and to ban the ‘black box’ devices intended for that purpose. In practice the DMCA anti-circumvention provisions have done little to stop ‘Internet piracy.’ Yet the DMCA has become a serious threat that jeopardizes fair use, impedes competition and innovation, and chills free expression and scientific research.”

While I didn’t find the possibility of streaming a pirated movie particularly morally troubling, I do think there is a lot of moral gray area in users downloading and sharing copyrighted material, and I do see some necessity for legislation like the DMCA. In theory, piracy is troubling because individuals can use and distribute other people’s products for their own benefit. While it does not seem like a crime to rip a DVD or to view/use a piece of media you already own on a different platform (which, according to the How-To Geek article, is a crime under the anti-circumvention clause of the DMCA), I agree that it should not be legal to distribute the copyrighted material or intellectual property of some artist or company for your own personal gain – even under the excuse of just “sampling” or “testing” the material.

However, like several of the articles, I don’t see piracy as a particularly threatening problem. Piracy may be a real problem, but because of DMCA regulations and the emergence of streaming services such as Netflix and Hulu, piracy is simply becoming increasingly obsolete, especially among my generation. As the Slate article “Goodbye to Piracy”, which details the transformation of attitudes and actions on piracy, points out, “Piracy was becoming too expensive and time-consuming—after a certain point, it was cheaper to subscribe to Spotify and Netflix… Finally, [entertainment industries] changed course and adopted new technologies to provide unlimited access. People, especially young people, scrambled to sign up, and generational attitudes toward copyright rapidly reversed, precipitating a cultural shift.” According to the Plagiarism Today article “The Long, Slow Decline of BitTorrent”, “This is supported by the numbers as well. Netflix, YouTube, Amazon Video, iTunes and Hulu combine to make up well over 60% of all peak internet traffic in North America. That’s 20 times the estimated size of BitTorrent… Simply put, when a month of unlimited streaming costs less than a lunch, people snap it up. Even more so when the library of content is sound and the ease/reliability of streaming is very high.”

In my opinion, the reality is that, even though the protection of patents, copyrights, and intellectual property is essential to innovation and independence, and even though piracy can infringe on that, it’s not something that is threatening to take down our entertainment or media industries. Most people, like myself, are perfectly happy to pay for our Netflix and Spotify subscriptions to get our content legally nowadays. And as such, at the end of the day, I’m just not sure it’s such a big deal for an upstanding college student like myself to pirate a movie every once in a while.

Reading 12: Self-Driving Cars

The Tesla press release “All Tesla Cars Being Produced Now Have Full Self-Driving Hardware” clearly lays out the motivation for developing self-driving cars: “Self-driving vehicles will play a crucial role in improving transportation safety and accelerating the world’s transition to a sustainable future. Full autonomy will enable a Tesla to be substantially safer than a human driver, lower the financial cost of transportation for those who own a car and provide low-cost on-demand mobility for those who do not.” Supporters argue that self-driving cars will, in the long run, make our roads safer and more sustainable. However, the arguments against self-driving cars, which include criticism of AI’s ability to safely share the roads with humans and make life-or-death decisions, are perhaps even more compelling.

The abstract of the Science article “The social dilemma of autonomous vehicles” begins with this ominous statement: “Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians”. From the article (and probably from common sense), it is clear that most people “approve of autonomous vehicles that might sacrifice passengers to save others, [but] would prefer not to ride in such vehicles” – in other words, we would prefer that AVs opt for self-sacrifice in a dire situation, unless it is our own life on the line. This question of how an artificial intelligence should approach life-and-death situations is, of course, not easily answered, and I don’t even think there is a right answer. I do think, however, that the entire question is in itself an argument against self-driving cars – the burden of the consequences of these decisions will ultimately have to be shouldered by humans, and therefore the decisions that lead to these results should also be made by humans. Whether AI makes the “right” or “wrong” decision in situations like these, the cost of the event will be fully human. A human faced with the moral decision to sacrifice themselves to save another life, or to take another life in order to save their own, will always bear the burden of that decision, and should therefore be afforded the right to decide in the first place.

Another major argument against self-driving cars is that they might never be able to fully mimic and safely interact with humans sharing the road. The Quartz article “Self-driving cars still can’t mimic the most natural human behavior” explores all the ways in which we as humans intuitively assess road conditions in order to make the safest decisions based on the judgements of humans around us – judgements like “that person is not going to yield,” “that person doesn’t know I’m here,” or “that person wouldn’t jaywalk while walking a dog.” Is that bicyclist going to turn left or stop? Is that pedestrian going to take advantage of their right-of-way and cross? Deep-learning tactics would typically be used to “teach” these judgements to AI, but as the article points out, “humans can make surprisingly accurate judgments about other humans because we have an immensely sophisticated set of internal models for how those around us behave. But… how do you label images with the contents of somebody’s constantly fluid and mostly nonsensical inner monologue?” It seems that computers might never be able to safely interact with humans in complicated road situations.

I personally do not trust self-driving cars, and I don’t think I would have one myself. I think that the human cost of developing the technology is potentially too great, and besides, I enjoy driving – why would I want to turn that task over to an AI if my time in the car will be the same? However, I also acknowledge that the arrival of self-driving cars currently appears unavoidable, given the huge investment in and production of the technology by companies like GM, Ford, Tesla, Uber, and Google. In that case, companies have a responsibility to ensure that the technology they put on the roads is as thoroughly tested and as safe as it possibly can be, and local and state governments have the responsibility to ensure that the technology meets this standard. Self-driving cars may be the way of the future, but we cannot allow their development to jeopardize the future of our citizens.

Reading 11: Automation in the Workforce

According to the readings, while it is clear that automation is impacting, and will continue to impact, human employment, it is not altogether clear to what degree AI will replace human workers, or what effect this will have on the workforce as a whole. According to the Wall Street Journal article “The Rise of Job-Killing Automation? Not So Fast”, one key reason for the uncertainty is “that as new technological capabilities develop, only certain aspects of jobs are replaced by technology. Substitution of smart machines for human labor works at the task level rather than at the job level”. But the line between these “aspects” of jobs is unclear. We know that new technology can completely wipe out the need for jobs like gas pump attendant or even cashier, but surely AI cannot replace all human work – and the uncertainty of how far AI might go in the workforce can be scary.

Among the readings, there seemed to be a pretty clear consensus that, in the short term, automation will bring about job loss and painful restructuring of labor. Sure, there are jobs that AI will never (or, at least in my opinion, should never) be able to do, such as full-time caregiving or making life-or-death decisions; but many tasks can and will be taken over by AI. While automation might eventually free humans for other endeavors, the reality is that, in the foreseeable future, it will probably also really suck for workers whose jobs will be affected by technology. Does this mean we should halt the development of automation technology? No – in the long term, as we have seen from past trends, automation can create new jobs, improve the economy, and propel our society forward. At the same time, we cannot deny that this shift will hurt people, especially workers with lower income and less education. As a society, I believe we have a responsibility to inform and educate these workers, and to take care of them during the transition as best we can. As the Technology Review article “Tech companies should stop pretending AI won’t destroy jobs” put it, “these changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times”.

In my opinion, while it is an extremely complex idea, a Universal Basic Income may be a viable means of keeping workers afloat in the midst of these “worst of times”. I don’t think it is a viable long-term solution, and it cannot be a replacement for other social services provided by the government, but I think it might be the best way of ensuring access for all people to basic needs like food and clean water during periods of unemployment or new job training. While I acknowledge some of the issues pointed out in the WIRED article “The Paradox of Universal Basic Income”, such as costing too much or leading to lower wages, I think the potential benefits outweigh these drawbacks – a UBI system could “shrink a huge array of costly social welfare services like health care, food assistance, and unemployment support by providing a simple, inexpensive way to let individuals, rather than the government, decide what to spend the money on”, could “redistribute wealth and empower groups like stay-at-home parents”, and might even eliminate poverty.

At the end of the day, even though the transition to automation will be a painful process that requires a restructuring of the way we educate, train, and support workers, I believe it will be a good thing for humanity. We must remain sensitive to the human needs for income, a job, and a sense of purpose, but we must find a way to balance this with an increase in technological innovation and efficiency. I’m not sure exactly how this might be done, or if UBI is the best solution, but I am sure that it is something we must figure out. As the Newsweek article “How Artificial Intelligence and Robots Will Radically Transform the Economy” summed up so eloquently, “AI will give us a chance at cracking our most pressing problems. It promises to help us end cancer, ease climate change, manage bursting cities and get our species to Mars. Of course, we don’t know if we’ll succeed at any of that, but one certainty is that we can’t do it without AI”.

Reading 10: Fake News

“Fake News”. These two words, for me, conjure up images of President Trump yelling behind podiums leading up to the 2016 election, of Mark Zuckerberg testifying before Congress, and of countless media hosts repeating the words in a variety of contexts and stories since they were first publicized. From the readings, “fake news” is, first and foremost, manufactured news stories; but, going beyond that, it is also what Tim Weninger in the Inverse article describes as “how propagandists and opportunists can leverage social dynamics and simple math to foment confusion or make a buck”. The term is used to call out groups or pages that propagate false or biased information that they make out to be real and often use to sway people to an opinion or mindset – even through mechanisms like online voting, as explored in the article “How the ‘Knights of New’ Became Fake News Pawns”.

I believe that, to a degree, social media platform providers such as Facebook and Twitter do have a responsibility to regulate “fake news” and censor the information that spreads on their platforms. As explored in the Reuters article “Why Facebook is losing the war on hate speech in Myanmar”, it is clear, according to a UN investigator, that “Facebook was used to incite violence and hatred against the Muslim minority group”. These platforms are easily used to influence people – their opinions and even their actions – and therefore social media providers must ensure that the content spread on their platforms is true, wholesome, and not violent or racist or extremist. While it is a little disconcerting to consider a private company censoring information, these companies already “censor”, or at the very least control, the information that gets to you based on its popularity, your own interests and personal information, etc. These companies have a huge amount of power to influence the population, and the responsibility to properly inform this audience is more important than protecting free speech for extremists or showing only content that will reinforce a person’s biases.

In a similar way, news or content aggregators such as Facebook and Google also have a responsibility to regulate “fake news”. These entities absolutely have a large impact on the news that people read, and I believe they therefore have the responsibility to ensure that the news they show to people is credible and balanced, and not only affected by its popularity. Similar to Tim Weninger in the Inverse article, I think that we must help “make social media more immune to certain social influence biases” and “try to understand the dynamics that support our media distribution systems, and to provide people high-quality content”; how this is best done, I’m not sure.

Ultimately, the responsibility to stay fully and properly informed should fall on the individuals who use these social media platforms and news aggregators – we, as citizens, have a duty to be responsibly informed. I don’t use Facebook or Twitter to get my news because I am absolutely concerned about the so-called “filter bubble”, and I believe all other citizens should do the same. However, the reality is that most people will continue to be shaped and influenced by those platforms, and therefore the platforms must be responsible and accountable for the news and information they propagate.

For this reason alone, our focus on “fake news” is absolutely warranted. However, “fake news” cannot be a political pawn used only when it suits a certain message or party – and that is the even greater danger. According to The New Yorker article “How Russia Helped Swing the Election for Trump”, President Trump will not acknowledge any probe into Russian interference in the election and “refuses even to discuss it. In public, Trump has characterized all efforts to investigate the foreign attacks on American democracy during the campaign as a ‘witch hunt’”. Using “fake news” to scare people into an opinion, or using it as an excuse to disregard legitimate, if uncomfortable, facts, is perhaps even more damaging than fake news existing in the first place. We must all be vigilant against fake news, and demand that all events be thoroughly and fairly reported.

Reading 09: Net Neutrality

From my understanding, Net Neutrality is basically the assurance that all Internet service providers provide equal access to all web content. The New York Times article “Net Neutrality Has Officially Been Repealed. Here’s How That Could Affect You.” lists three main rules that Net Neutrality legislation enforced: Internet service providers could not block any lawful websites or apps; providers could not purposefully slow the transmission of any lawful websites or apps; and providers could not speed up service for companies or individuals who paid a premium. The obvious importance of this policy is that it ensures that big internet providers cannot enact policies that favor certain individuals or companies – in simplest terms, the internet remains a service guaranteed equally to all. Those in favor of the repeal argued that Net Neutrality “restrained broadband providers like Verizon and Comcast from experimenting with new business models and investing in new technology” and that “before they were put into effect in 2015, service providers had not engaged in any of the practices the rules prohibited”.

According to the IEEE article “Is Net Neutrality Good or Bad for Innovation?”, there is no clear consensus on whether keeping or repealing the Net Neutrality legislation would be better for consumers, innovation, and the economy; it is not clear “how much faster or slower content might be delivered, or what fees an ISP would charge for each service”, and for consumers, “if ISPs could charge content producers more to cover the expense of maintaining their network, they may charge consumers less for home service. Of course, content producers could also wind up passing along the cost of the extra fees they must now pay to consumers, zeroing out any cost savings from ISPs”.

Even though it is not totally clear what long-term effects repealing Net Neutrality may have, I believe that in this case we must err on the side of caution. Maintaining Net Neutrality legislation would ensure that consumers get fair and equal access to Internet services, and while there is a possibility that producers may have greater room to innovate and consumers might benefit from lower costs, this is outweighed by the possibility of the reverse happening. At the end of the day, it is true that “the Internet is a public service and fair access should be a basic right”, and this must be protected by regulations like Net Neutrality. In fact, according to the IEEE article, this may be the better move for innovation as well, since “the greatest threat to innovation is if new companies, innovative companies, have to pay a lot to be on the same playing field as everybody else”. I think the Net Neutrality legislation enacted under the Obama administration is the best way to protect and ensure innovation and the right to Internet access.

In a broader scope, this speaks to a larger question of whether the government has a role to play in a free market. Just as in this situation, I believe that a free market is extremely important, but that the government does have a responsibility to set the boundaries, if you will, for the market in order to ensure a level playing field. The government should not strive to directly hurt innovation or free market competition, but it also must ensure that the actions of companies, which are usually inherently profit-driven, do not crush individuals or new businesses – just as I believe the Net Neutrality legislation ensures.

Reading 07: Retaining Privacy Despite Targeted Online Advertising

It is disturbing to consider that every mouse click and keystroke that you make in a browser window, or every credit card swipe you make at your local Target, can be recorded and amalgamated to create a profile of who you are as a person. That companies can track and purchase personal data in order to predict your next major life event or to “[send] you coupons for things you want before you even know you want them” (NY Times article) seems like something straight out of a 20th-century science fiction novel. Incredibly, this is the reality that we live in. The question is: where is the ethical line for companies in the collection and analysis of personal data for advertising purposes, and at what point does the responsibility for tracking these activities fall on the individual?

While disturbing, I wouldn’t go so far as to say that it is ethically corrupt for companies to collect personal information for advertising. As was clearly shown by the Pew surveys, very few people actually take steps to deny this information to companies, including “changing their privacy settings on social media (17 percent); using social media less often (15 percent); avoiding certain apps (15 percent); and sometimes opting for face-to-face conversation instead of using the phone or Internet (14 percent)” (The Atlantic article). We may not be aware of all the ways in which companies collect data because we don’t take the time to read the ridiculously long Terms of Service documents they put out, but the fact remains that we CHOOSE not to read and understand these documents. Companies certainly have a responsibility to disclose what information they collect and analyze, but users then also have a responsibility to read that information if they want a say in how their information is used. As long as companies are collecting their data with permission, it becomes the individual’s responsibility to track and understand these activities. Our privacy extends as far as we demand that it does.

Just as it is our right to have access to information on what data companies have and what they are going to do with it, it is also our right to deny this information to companies. By this same token, it is our right to refuse content we don’t want, such as ads. Despite the power of these huge tech giants, no one owns the internet; although it is nearly unbelievable nowadays, no one owns our personal internet experience either. Just as we have the power to decide what we click on, or what we type into a search bar, we should also have the power and the freedom to choose which content – including ads – we see and don’t see.

I believe companies should be allowed to use, for their own purposes, personal data that users have specifically given them permission to access; however, users have the right – and the ability – to understand what information they are turning over, and to refuse content. As uncomfortable as it may be to think about, is it really so bad for companies to use information that we don’t restrict in order to show us products that we might like? Just like anything else – whether it be that cookie that’s calling your name in the break room, or the urge to take a nap instead of finishing a project that’s due – ultimately, we have the power to let these advertisements affect us or not. Perhaps this is overly idealistic, but I still believe that we have the power to use the internet smartly and compartmentalize all the targeted posts and advertisements we see; companies may be able to figure out what we like and dictate which advertisements are targeted at us, but they will never have the power to dictate what we do with the information and products that flash on our screens.

Reading 06: Necessary Constraints on Privacy

This country is built upon the conviction that we are entitled to “life, liberty, and the pursuit of happiness”; an infringement upon these rights is simply and fundamentally un-American. Many would argue that our right to privacy is protected by these basic rights, that we have the liberty to conduct our lives without the threat of government infringement or interference. However, as uncomfortable as it might be, we must ask ourselves at what point denying personal privacy, more than protecting personal privacy, guarantees these rights to our citizens at large.

We must find a balance between these two things; in other words, large tech companies like Apple are ethically responsible for protecting the privacy of their users, AND are ethically responsible for helping to prevent violent or harmful activities that their platforms may enable. In “A Message to Our Customers”, Apple is right that people nowadays use phones “to store an incredible amount of personal information, from our private conversations to our photos, our music, our notes, our calendars and contacts, our financial information and health data, even where we have been and where we are going”, and that “compromising the security of our personal information can ultimately put our personal safety at risk”. In creating the structures to store this information, Apple does have a responsibility to protect it.

At the same time, it would be naive to think that no one would ever use the privacy provided by tech companies like Apple to hide illegal and dangerous activity – we must find a way to access this data when it is warranted and requested through the proper channels, which, yes, may infringe on individual privacy, but can ultimately lead to greater safety and protection. Would this best be accomplished by weakened encryption or the creation of backdoors? Maybe not; I’m not sure I have a balanced solution for this problem. But to the larger question of whether technology companies have a responsibility to allow the government to collect basic information on their users in order to aid in data processing for national security, I think the answer is yes.

However, the government should not be able to access people’s information without checks and without public knowledge; there is a need for greater transparency in what data the government collects, and why it is important. Huge leaks like Snowden’s are so explosive because people feel left in the dark about how and why the government operates. Making citizens feel more important or valued in the process of data collection and analysis might lessen animosity from the public towards the system, or at least make leaks like Snowden’s less damaging. The government ultimately works for the people, so while people don’t need access to all of the specific data, I believe they have a right to know on a high level what the government monitors, and why. This would work to remedy the feeling that “the government and corporate sector preyed on our ignorance” (The Guardian article), which prompted Snowden and other whistle-blowers to leak information and enrage the public.

The government is not demanding that citizens reveal their deepest, darkest secrets. It does not closely examine every piece of personal information it obtains just for the hell of it. Instead, it collects data at large to identify potential threats and to keep our population safe; in my opinion, if it needs personal information in order to make this happen, then it is in all of our best interest to make sure that it has it.

While this is a complicated discussion with valid points on both sides, in my mind this question boils down to this: one person’s right to keeping their information private does not trump the greater community’s right to safety, to the fundamental right to life in this country.

Reading 05: The Fine Line Between Whistle-blowing and Traitorism

The story of Chelsea Manning is extremely divisive and ties to many of my personal experiences and convictions; it is a story which really embodies the difficulty of the potential good and potential harm that whistle-blowing can do.

As someone with close ties to the military, I understand that a high level of responsibility comes with a secret clearance. While I have not fought myself, I have some understanding that war is messy and nasty and not always fair – those who have served would tell you that war is not fun, or something that they take pleasure in. I think everyone, on some level, understands this, and I don’t think that any of what she released necessarily shed light on terrible atrocities or war crimes that the U.S. committed – rather, she released some admittedly disturbing videos and information about some operations, but the vast majority of what she released was information on U.S. and allied operations and intelligence that was damaging to our soldiers’ security in theater. From the readings, what signaled to me that this was just a grab for attention were the actions of former hacker Adrian Lamo. He himself had contributed funds to Wikileaks in the past, and had communicated with other hackers who wanted to talk about their adventures, but he had never considered reporting anyone before; however, he believed that “Manning’s actions were genuinely dangerous to U.S. national security”, and even said that she was irresponsibly “basically trying to vacuum up as much classified information as [she] could, and just throwing it up into the air”. To me, this makes it clear that this was not an attempt to ethically expose corruption, but rather a cry for attention that hurt our country and our soldiers in harm’s way.

At the same time, I do believe that there must be some level of accountability for the government. In my opinion, this should never come in the form of intelligence leaks, especially in times of war and especially regarding information that could affect the safety of Americans at home or abroad. That said, the government ultimately answers to the American people, and as such should strive to be as transparent as possible with regard to all operations – war, intel collection, etc. – in order to make the American public feel informed and “in the loop”, thereby (ideally) inspiring trust from the people in the government. Keeping secrets that are later exposed weakens the relationship between people and their government and erodes the reciprocal trust that is necessary for our democracy. I do not believe that Chelsea Manning’s actions were the right way to enforce this accountability, but sometimes ethical whistle-blowing may be necessary.

I think that the core takeaway from this situation is that people on all levels of power/government need to do better. Manning could have done better by navigating the proper military channels with this information. Her superior officers could have done better by recognizing that she was struggling, and checking in with her not only for the sake of maintaining security of classified info but also to possibly help with some of her personal issues. People at even higher echelons could do better by taking better care of information and holding themselves and the forces to higher standards of accountability for their actions. Ultimately, whistle-blowing has its place in a society of integrity, but not at the expense of the safety of our people.

Reading 04: Some Thoughts and Experiences on Gender Disparity in STEM

Oh boy, few things get me riled up like misogyny (or discrimination of any kind), and how it manifests in our culture and workplaces today.

I would be willing to bet that all, or nearly all, of the women in our Ethics class have encountered sexist comments, uncomfortable slights, or other “microaggressions”. Even though I grew up in a loving family, surrounded by good people, attending good schools, even I could list off several instances where I’ve been made to feel uncomfortable or “less than”, simply because I’m a girl – especially in STEM classes or positions. The stories shared in “Why Is Silicon Valley So Awful to Women?” are, unfortunately, not unique, and I believe they would be echoed by women of nearly all ages and in nearly all career fields; CS is certainly no exception.

This swings the other way too – when people are so afraid of saying the wrong thing that they resort to uncomfortable levels of political correctness. Just a few weekends ago, during an Air Force event, I was speaking with an officer (at least eight years older than me, with far more experience and rank than I have) who kept referring to me and several of my classmates as “you guys”. Every single time he said it – at least five times in the span of a few minutes – he would stop and apologize profusely. Needless to say, it was uncomfortable. Of course I appreciated his effort to make me feel included and equal in the conversation, but his over-emphasis on the fact that I was different from the group made it all the more obvious that I was somehow out of place.

Would it have been better for him to “bro out” and treat me like one of the guys? Maybe, but probably not. As one of my good friends at Notre Dame from Botswana put it: “Some people are so determined to show that they accept me that they single me out or make me feel different. Around [other people], I am different, but that doesn’t really make me different” – in other words, I want people to recognize and appreciate that women might have different strengths or experiences, but not let that create huge differences in how they work with or interact with men and women.

Now, this uncomfortable place that we women and other underrepresented groups occupy has, on its own, given me some cool opportunities. When I was applying for technical internships last summer, I was told repeatedly that I was a shoo-in for many positions, not only because I was a good applicant on my own, but because so many tech companies are trying to diversify and hire more women. Do I enjoy benefits like these? Of course. But I think it would be better if we lived in a world where we didn’t have to try to level the playing field with extra opportunities later on because the field was level from the beginning.

The fact is, as a woman in STEM, and perhaps even more so as a woman choosing to go into the military, I am going to continue to have awkward interactions or slights throughout my career; to survive and to be productive, I’ll have to find a way to put up with them and focus on crushing the job at hand. Maybe it doesn’t help to dwell on these problems so much, because I think the only way to truly change them is to change our culture, the way we teach and talk to our children, the fundamental way that we understand and interact with one another – and that seems like an almost insurmountable task. The unfortunate reality is that we live in a male-dominated culture, one in which women are subconsciously seen as less capable, less worthy, less smart – and as much as I hope to see this change, I believe that until we are all ready to talk about that and work through it together, nothing will change.