Reading 14: Computer Science Education

The articles for this week brought up a lot of pros and cons of computer science education that I had never really considered. I think that it is a bit of an exaggeration to claim that coding is the new literacy; however, it is undeniable that programming is seeping into fields that would surprise many who have not received a formal computer science education. To that end, I agree with the article that said that coding is not the new literacy. However, I feel like that article failed to consider the importance of the thinking style that coding can instill. A different article made the claim that computational thinking is what will be important in the next few decades, and this is also what I believe. I don’t see the necessity of teaching the subtle difference between a for and a while loop, or static versus dynamic allocation. I think that it is more important that people be able to look at code and, while they may not understand what every line does, quickly get an idea of what it does. I think that basic exposure to coding and computational thinking should be required, but I do not see the need for, or benefit of, mandating that everyone learn to program.

From what the articles said, the main arguments against pushing computer science onto everyone are either that it isn’t necessary, that it’s something we shouldn’t be trying to sell the way we are, or that it’s not realistic. I think that there is some merit to these arguments, particularly the one that the way many are trying to sell computer science to young people is by claiming that computer science is easy to do and some kind of get-rich-quick scheme, using examples such as Bill Gates or Mark Zuckerberg.

This seems deceitful to me, and in the long run it may turn many away from computer science when they are unable to pick it up quickly and then tell younger friends and family that they too should avoid computer science. I think that comparisons like these hurt the chances for computer science education. Children are impatient, and when they struggle they will quickly become disheartened, and all but the most dedicated will give up.

When detractors say the push is unrealistic, their main point is that getting enough teachers for all the public schools in America is no small task, and it is even harder to convince good computer science graduates to take a job as a public school teacher when they could make much, much more money working for a tech company. This is a difficult problem to remedy. Offering to pay computer science teachers more than other public school teachers would make the teachers’ unions mad. They would want all teachers’ pay to increase, but this is not realistic for most states; as one article noted, even $1.3B is not enough to get a computer science teacher into every public school in America.

I do not know enough about education to suggest any good ways to implement computer science education in public schools. I think that computational thinking should be a requirement, but how you make a class out of computational thinking without it turning into a terrible theory class or a straight-up programming class is a mystery to me. I think that a good deal of problem solving would need to be in this new curriculum, as well as some of the basics from courses like discrete math. It is hard for me to separate computational thinking from programming because I was not exposed to programming before college. Everything I have learned has been in the paradigm of getting a CS education.

I think that the Geek Gene is another interesting topic that the articles brought up. I used to think that, with enough effort, people could learn anything they put their mind to. I don’t want to say that I no longer think this is true, but I am certainly less confident that this is the case. I think that a certain amount of natural ability or interest is needed for someone to pick something up, but maybe it’s a case of nature vs. nurture; I don’t know. If I had to pick a side, though, I guess I would say that not everyone can learn to program, nor should it be the case that everyone needs to learn to program. I think that knowing programming can be compared to knowing a second language. I don’t think that everyone needs to become fluent in a second language, just as I don’t believe that everyone needs to know how to program. However, it is hugely beneficial to have at least a basic understanding of some second language. You may not be able to speak it yourself or read or listen fluently, but if you can read basic vocabulary or pick out certain words someone speaks, you can get very far, and this level of knowledge is what I think we should strive for in pushing computer science to others.

Reading 13: Intellectual Property

The Digital Millennium Copyright Act essentially says that circumventing digital rights management and reverse engineering are illegal. Some of the articles referenced fairly recent changes to the language that seem to allow some tampering for the purpose of after-market repair and personal tinkering, but from what I understood, this was more for cellphones and did not make the same exception for video game consoles or tablets.

I feel that companies do have ethical grounds when they try to protect their intellectual property. They needed to invest money and time into developing their product, and it is only natural that they then desire to profit from their labor. This was fairly simple and straightforward in the past. However, in the last few decades, with software becoming prevalent in so many things, it has become harder to decide what constitutes fair use of a company’s IP. This is basically what all of the articles on patents and DRM talked about.

If the action is simply ripping a file off of a CD or the like, and the file is then used just for personal things, I feel like that should be allowed. The item has been paid for, and the company that produced it is not losing out on potential profit since the file is not being distributed. It is a tricky situation, though, because as far as I know, besides products with software, there has never been an issue of partial ownership after buying a product. I suppose there is a difference between something that contains no software, like a mattress (although as I’m writing this I realize that there are some mattresses that can be adjusted, and thus likely contain some proprietary software), and something that is inherently tech, like a cellphone or car.

The only analogy that I can think of when it comes to this issue of reverse engineering is when people try to recreate a “secret recipe” like Dr. Pepper, KFC, or Coca-Cola. As far as I know, these practices are not illegal, and there isn’t really anything Dr. Pepper can do to stop you from experimenting to figure out their 23 flavors. If you did succeed and decided to take your imitation product to market, it doesn’t seem to me that there is anything they can legally do to stop you, since plenty of grocery stores sell store-brand sodas that imitate Coke or Dr. Pepper. For this reason, I don’t think the laws should be any different when it comes to tech and software. The reason that Dr. Pepper and Coke survive despite the horde of imitators is that they are simply better than the competition. I think that the law should treat tech similarly, so if someone wants to challenge Sony or John Deere by selling or giving away music or tractors, the free market will decide the winner. If these companies have higher quality products than the imitators, then they should have nothing to fear. If the imitators pull ahead, Sony and John Deere may need to rethink some things like pricing or customer relations. So, for the same reasons that I think it is okay for a person to reverse engineer a recipe and then tell others or sell a competing product, I think that it should be ethical for a person to reverse engineer a piece of software and give it away or sell it. I also feel that it goes without saying that it is ethical for researchers or developers to tinker with software to find bugs or flaws, provided that if it is something like a security flaw, they first tell the creators rather than publish the flaw for everyone on the internet.

I still understand the argument that companies make when it comes to the use of their products. Software is a major part of almost every product on the market today, and software is not inexpensive to develop and make robust. However, personally, I don’t like the claims they make about ownership of a legally purchased product. After reading the articles, I have to agree that the DMCA and restrictive licensing don’t seem like good solutions. I think that ownership should wholly and completely transfer to the buyer, and he can do with his new product as he wills.

Reading 12: Self-Driving Cars

The motivation for developing self-driving cars is twofold. First, tech companies believe that they could help reduce the 35,000 traffic deaths each year that are blamed on driver error. Second, they could result in huge savings in the transportation and shipping industries, as they would reduce the number of drivers a company needs (though this leads to the questions presented last week). As we discussed last week, however, taking work from professional drivers would put many people out of a job, as “driver” is one of the biggest occupations in America. Also, there are those who contend that autonomous vehicles won’t necessarily make the roads safer. They look at events like what happened to the woman in Tempe, Arizona, and use them as examples of why we need to slow the advancement of AVs to allow for better regulation.

As the articles stated, the current social dilemma facing us in terms of autonomous vehicles is that cars can injure people, both pedestrians and passengers. However, with AVs there will no longer be a person controlling the car; instead it will be an AI. Where do we lay the blame when an autonomous vehicle crashes and kills someone? The articles asked if it should be the owner or the car manufacturer. Perhaps it should be the programmer who wrote the AI controlling the vehicle. Taking this further, how should we have AVs choose what to do in a trolley-problem situation?

At some point, we need to let the car’s AI decide who lives and who dies, or at least who it should try to save. Some people are queasy at the idea of letting AI make moral, life-and-death decisions. The articles explored the possibility of programming into the machine a type of utilitarianism. If presented with the option of hitting a group of 3 or 4 pedestrians while ensuring the safety of the single passenger, versus swerving and harming that passenger only, proponents of the utilitarian approach would say that the AI should choose the latter every time. This presents a multitude of other problems, though, such as how it should weigh the value of a life and overall happiness if it needs to decide between an elderly person and a young child. Those in the AV industry do not like the utilitarian approach, as in most scenarios the car would need to choose to harm its passenger, and most people would not want to get into a vehicle that may choose to kill them over someone else. On the flip side, though, few pedestrians would be comfortable knowing that the autonomous vehicle driving down the street would choose to hit them to preserve the passenger instead of hitting a tree.

When it comes down to it, one of the articles mentioned a study that found that people wanted the car to prioritize saving pedestrians when they were in the pedestrian’s position, but wanted the car to preserve the passenger when they themselves were the passenger in an autonomous vehicle. I am not sure what the right answer is, as I think most people are like this; it is, after all, a basic instinct to protect oneself over a stranger.

I think that to some degree, government needs to play a role in regulation, at least to lay out some situations and dictate who is liable in each. If an AV crash occurs due to some sensor malfunctioning, it would likely be the fault of the manufacturer. If the software misidentifies something, the blame would likely be placed on the company that wrote the AI. Beyond this, I don’t think there is a need for government intervention.

As mentioned above, one of the social and economic impacts will be many people put out of work. This is unfortunate, and I think that companies should do something to help the employees who will be affected by AVs, but I do not think that we should inhibit or stop the development of autonomous vehicles simply because a group, albeit a significant group, will be negatively impacted. I think that companies could take the new profits that they will be getting from the use of AVs and use them to help these employees get a degree from a community college, or something like that.

Personally, unless I could get a self-driving car for the same price as a regular car, I do not think that in their current state I would want one. Not for any moral or ethical reasons, but simply because there are many limitations to the technology. As one of the articles stated, self-driving cars do not or cannot perform well in rain or snow, in dusty or foggy conditions, in areas where the road markings are worn down, etc. I just do not think that there are enough areas in the United States that are free of these setbacks yet to make them anything more than a fun gimmick.

Reading 11: Intelligence, Automation

The readings presented two different ways to view artificial intelligence. The first saw AI as a “machine brain” that would surpass the human brain. The Notre Dame Magazine article called this the “singularity,” which would herald the machine apocalypse. The second view was that machines can be taught to do a few things very well, better than any human in fact. However, these computers are really bad at everything else, so to use the word “intelligence” to describe these sorts of algorithms is a misnomer.

I am in the second camp. We have demonstrated that we can “teach” machines to do difficult tasks like play chess or an Atari game better than any human, but these AIs can’t do other things, like identify images, the way humans can. I suppose that someone could combine the best of the specialized AIs into one super AI that can do a lot of things well, but I still do not think that this would be comparable to human intelligence. As a few of the articles pointed out, many image recognition algorithms can be duped into thinking a picture of television static is actually a picture of a cat. Maybe this just indicates that there is still room for growth in computer vision, but I think that there are human qualities that cannot be captured by a machine, not necessarily vision, but compassion or curiosity, for example.

One of the articles claimed that AlphaGo is not an AI because it lacks many of the qualities that the author believes are required to qualify something as intelligent. I found his argument interesting and, for the most part, convincing. I don’t know that I would go so far as to claim that AlphaGo and AlphaZero are not artificial intelligence, but I do not think that these examples of sophisticated programs that can play board games indicate that artificial general intelligence is on the way.

Watson is a more interesting example, as this AI demonstrated an ability to understand tricky, nuanced questions and, more often than not, correctly answer them. The articles that looked at the Watson case acknowledged that this sort of natural language processing, and the ability to access huge repositories of information incredibly quickly, would lend itself rather easily to other domains such as medicine. Still, though, I do not think that Watson is the herald of AGI. Referring back to the AlphaGo article, there is a lot that needs to go into “intelligence,” such as the ability to interact with the world and a demonstration of curiosity.

Before reading about the Chinese Room problem, I did think that the Turing test was an adequate test of intelligence. The test demonstrates a machine’s capacity to carry on a human conversation without the human becoming aware that the machine is a machine. The Chinese Room counter-argument is that an AI may be able to accept natural language input and spit out valid output simply by translating the input into something it can process and then following internal rules to generate acceptable output. The argument holds that this is no different than if a person were alone in the room pretending to be the AI: he or she would accept natural Chinese language, translate it into something they could understand, generate acceptable output, and then translate it back into Chinese and send it as output.

Proponents of the Chinese Room claim that since the person in the room does not understand Chinese, a computer cannot be said to truly “understand” natural language either. The machine is simply translating the input and producing the output, blindly following the rules set in place. When I was reading this article, I found the argument quite convincing, but after some thought, I feel like the hole in the argument is the definition of “understanding.” The human may not be able to understand Chinese, but once they translate it into some other language, they can. Similarly, the machine may not be able to understand the English input, but it has rules, like human grammar rules, that dictate how to translate the natural language into code that it can comprehend. I don’t know that the Turing test is a conclusive test of intelligence, but it is certainly a large step towards one.

Personally, I am not worried about AI taking over our lives. I think that there are a lot of domains in our lives that AI could augment, but I do not think that there is a worry about AGI taking over. I am convinced by the skeptics who see these AIs that can do spectacular things like play chess, Go, or Jeopardy as simply specialized machines. They may be able to perform great feats in their designated domains, but I don’t see that translating to a general intelligence anytime soon.

Some of the articles that talked about the mind referenced that many experts say the brain can be reduced to a biological computer. There is no “vital spark,” as one article put it, that separates a mechanical computer from our biological ones. I think that computers can be taught many of the things that humans can, but there are many examples of humans learning differently, or knowing or understanding things, in ways that we do not yet know how to teach machines. Morality is one such thing. I do not think that there is currently any way to give a machine morality, because morality can vary radically between two people. Because humans cannot agree on what constitutes a moral or immoral action, I do not see an AI having morality.

Reading 10: Fake News, Anonymity

Trolling is an online activity in which a person or group participates in activities ranging from banter to harassment targeted at some other person or group. In the articles we were to read, though, the writers only considered extreme forms of trolling such as harassment. Trolling almost exclusively occurs on anonymity-friendly platforms, and most of the articles on the topic assume that the trolls were upset or angered by something someone wrote online. While I agree that this may be how trolling starts, and the way in which someone gets targeted, I think that after some time of being trolled, that person becomes a target for trolls who just want to troll and do not care about the “moral” fight. Likewise, I think that there are a fair number of trolls who simply like to get a rise out of someone, so they will find a vocal, passionate group and try to get into a “flame” war with them. This is something that, as far as I read, none of the articles really considered, but I think it needs to be figured out before we see, as one article put it, the “death of online trolls.” Cyberbullying is similar to trolling, but it involves minors, and the bully and the bullied typically know each other in real life. The main difference between cyberbullying and normal bullying is that, since cyberbullying takes place on the Internet, the actions tend to be more permanent and witnessed by a larger audience.

I think that companies do have some obligation to fight online harassment. To at least some extent, companies need to be aware of, and care about, what is going on on their platform. While I do not think that they should necessarily be held responsible for what people do on it, it is in their best interest to try to keep things civil. This may not be a great analogy, but you could compare an online platform like Twitter to a bar. The owner of that bar likely does not want any illegal activity going on on the property, so he or she would kick out those found participating in anything illegal at the bar. Similarly, if a fight breaks out, the owner should not tolerate it, as the fight could damage his or her property, and a bar known for having fights break out is unlikely to attract customers. A website that is a haven for harassment is unlikely to attract new traffic, and the people who are on the site could cause the website to get into legal trouble, so tech companies should do some suppression of trolling.

That being said, I think that current efforts are about as good as they can get without drastic changes to how many of the problem platforms function. One of the articles addressed what I see as the biggest problem in combating trolls: the ease with which a troll can make a new account. Platforms like Twitter or Facebook allow people to create accounts for free and do not require that they supply any real personal information. The only way to stop trolls from creating throw-away accounts would be to make them supply something that they cannot just make up, like a phone number or a credit card number. Once the number has been used on an account, it may not be used again, effectively allowing a person only a few accounts. Requiring users to hand over information like that to use a free platform is unlikely to go over well, though, so unless a better solution is proposed or people start to value control over freedom and anonymity, I don’t think there is much more that tech companies can do beyond what they are currently doing.

I think Gamergate was an absurd occurrence. I don’t have a lot to say on the matter, but I think the couple of articles did a good job of highlighting the key points. Regarding the question of anonymous behavior, I think that it is something we just have to deal with. Anonymity, coupled with the level of separation that comes from online platforms, seems to remove a lot of the inhibitions that people would have in a public setting, whether it’s that they feel free to post something dumb like a meme or something awful like harassment. I don’t see how you can easily separate the two without hurting the company or making people feel attacked for the bad behavior of a few.

Personally, I don’t think that cyberbullying should be a problem, but I am amazed that it has become one. I think that people should have some right to be protected from harassment, but today, I think that everything is “hate speech” to someone, and that is the root of the problem. I am not saying that there are no cases of online harassment, because there are, as shown in many of the articles. One of the articles made the distinction between online bullying, when both the bully and the bullied are minors, and online harassment, when one of the two is an adult. I think that serious cases of harassment should not be tolerated, but when it comes to cyberbullying, I don’t take such a hard stance because I feel that kids are kids, and kids fight. Like traditional bullying, there are lines that, if crossed, elevate the seriousness of the bullying. But in most cases, I think that cyberbullying is less serious than cyber-harassment.

In the case of cyberbullying, the traditional recourse offered to the victim is to tell an adult and then ignore it. Critics of this advice (one of the article authors was such a critic) claim that it is bad advice because children “need to be online to do homework” or other such things. I find this critique ridiculous because, while it is true that in many schools homework is shifting online, I do not think any school is assigning homework over Facebook, Twitter, or some online game’s chat. I have yet to hear a valid argument as to why a child needs to be on social media, so until I do, in the case of cyberbullying, I think that the best course of action is the traditional advice.

Similar to harassment, people also make a lot of noise about internet trolling being a problem. I believe that there is a difference between trolling and harassment, and in a lot of cases the trolls do cross that line and become harassers. Most trolls, though, do no such thing and stick to “clean” (non-harassing) inflammatory posts. When I think of a troll, I think of someone who joins a flat-earth group and dumps posts about how the world is round, or someone who types “kys” in an online game. I don’t see these actions as particularly harmful. Some people may take offense at the latter, and it certainly is offensive, but I don’t see it as causing any real or lasting harm. If someone in real life walked up to you and told you to “eat sh*t and die,” without a doubt you would be offended and likely upset that a stranger would be rude enough to say something like that to you, but I don’t think that it would cause you harm. I think that most people handle trolls well, and generally that means just ignoring them. In one of the articles, the author wrote about how she reached out to one of her trolls asking why, and this actually got him to apologize and stop trolling altogether.

Real-name services have a place, but I feel like few platforms should force users to supply real information because, as I mentioned a few paragraphs earlier, there aren’t many good ways of checking that the information is real. To be honest, I do not know of any service that enforces real-name information. As far as I know, sites like Facebook and Twitter ask for your real name, but they do not have a way of verifying it. It’s not that I would not use real-name services, but that I do not know of any that I would need to use. I do not think that real-name services are harmful; for a platform like LinkedIn they make sense. I would argue that for Facebook, and possibly Twitter, they also make sense.

I don’t feel strongly either way about anonymity. There is a time and place for protecting your identity, and a time and place for sharing it online. For example, if I am playing a game online, I don’t really want other players to know my real name. If I were searching for a job on LinkedIn, I would want to share my identity. I guess, if I had to pick a side, I would choose anonymity over full disclosure of identity. I think that a lot of good can come from having the inhibitions of identity sharing taken away, but the other side of that coin is the bad. I think that the good does outweigh the bad most of the time; it’s just that the bad tends to be very obvious and loud.

Reading 09: Freedom of Speech

Net neutrality is the idea that the Internet should be a level playing field for any and all who want to use it; no one’s bytes should have priority over anyone else’s. The arguments for and against net neutrality that the articles presented mostly addressed hypothetical pros and cons. The IEEE article admitted that there is not much data on the topic, so any pros and cons are speculative. On the pro-net neutrality side, the argument is basically, “the Internet should be fair for all, therefore we need to put rules and regulations in place to preempt any potential for abuse by Internet service providers.” On the flip side, anti-net neutrality proponents claim that “regulations hurt the free market and businesses, and stifle growth, so let’s let companies do their thing and only interfere if we need to.”

In all honesty, I am not completely sure where I stand on the issue. Both sides have compelling arguments, but neither has any data to back up its argument, and I think that is why I am hesitant to pledge support to either side. On the pro side, I think it is only right to keep everyone equal and let all types of legal content be accessed without preference. However, up until the Obama administration, there wasn’t really any heavy regulation, and during this time there weren’t any cases of ISPs abusing their power, as far as I know. Free market competition is what fuels American ideas and innovation, and I think there is something to be said for allowing companies to do things like, as a couple of articles mentioned, AT&T letting consumers stream DirecTV Now without it counting towards their data limit. Benefits such as those are what fuel competition and force companies to innovate, so I do not see why the IEEE article seemed to portray it as a negative.

I guess, if net neutrality had not been repealed, I’d be pro-net neutrality, as it would be the status quo. However, we now live in a world without net neutrality. I say we accept it and see how it goes. So long as ISPs are not participating in any shady business practices, I don’t see the need to crack down hard with regulation. I do not think that net neutrality is undesirable, because to us consumers it is very desirable. Fairness and equality are things that most Americans understand and very deeply desire. Instead, I think the net neutrality regulations fall into the unnecessary category. As one of the articles mentioned, the internet ran smoothly for over 15 years without much need for government regulation. It may be that in the world of today this example from the past does not hold, in which case some company will do something that people deem unfair or illegal, it will be taken to court, and we will have the first regulation that arose from necessity. Something like the AT&T example, I think, is fine. A company offering a service as “free” is not a cause for alarm to me. If instead AT&T had done something like double the data usage for competing services such as Netflix or YouTube but kept DirecTV at the standard rate, that would be something that should be scrutinized.

I think that the internet is a public service and that Americans should have fair access to it, but I hesitate to leap to the conclusion that it should be a basic right. I think that this is a topic that needs to be seriously considered by our government, but not until we have congressmen and women who can show basic comprehension of the Internet and the services that it provides. When it comes to what role the government has in regulating a level playing field on the internet, I would say that there is certainly a role the government needs to play. The unbridled free market seems to promote shady business practices since it rewards only those that succeed and crushes those that fail. This leads to companies feeling the need to take any action necessary to win, and ultimately to government intervention. I think that the anti-trust laws we have in place are a good first line of defense against any abuses ISPs could commit due to the removal of net neutrality, but without a doubt, our government will need to keep a close eye on them and be willing to intervene should anything with a “bad smell” happen.

Reading 08: Corporate Conscience

Corporate personhood is the notion that corporations are entitled to many of the rights that individual citizens have, such as freedom of speech and religion. The Consumerist article referenced many of the legal cases that fostered this idea and how they shaped the current landscape around this issue. Legally, cases dating back to the late 1800s have afforded corporations rights that average citizens hold, mainly those regarding criminal and civil liability. More recently, the terrain has shifted to include freedoms such as religion and speech, and the ability to spend money to support political figures. Corporations have been denied the Fifth Amendment right to abstain from self-incrimination, but all of this combined has opened the floodgates for corporations to “pick and choose” which rights they should and should not have and to start legal battles around the issue.

From a social and ethical standpoint, I don’t think that corporations should be allowed to publicly take political or ethical positions, nor should they be allowed to pour money into political candidates’ campaigns. However, as the Atlantic article mentioned, championing the position that corporations should not have personhood implicitly supports the argument that corporations’ only interest should be making as much money as possible for shareholders. This too is not good. I think that it’s important that corporations behave ethically, and they should not be encouraged to pursue monetary gain above all else. This means that they need to be allowed to have some sort of “corporate beliefs” or values that guide their actions, and sometimes, following these values could result in less profit. The Atlantic article seemed to advocate corporate personhood simply because not doing so supports companies making the most money. I don’t think that this is a good argument, but then, there doesn’t really seem to be any great or well-backed ethical or social position to take.


In the Sony rootkit case, I think that what Sony did was unethical, not because DRM is wrong, but because Sony did not disclose what its DRM software was doing, and because the software opened up every computer that installed it to third-party attacks. I think that protecting intellectual property is important, but in this case it was taken too far. The Technology Review article summed up the entire situation well. Sony was within its rights to try to protect the sales of the CDs and the artists’ music. The lengths they went to, though, were unethical. They were required to disclose the software they were installing on machines that played their CDs, and they did not. Furthermore, the software was contacting Sony each time a CD was played, exposing information about the user, mainly their IP address. The rootkit that they installed to hide their DRM software also had the potential to give malicious hackers the ability to hide other software on these systems. Then, when what they were doing was brought to light, they tried to downplay it and basically refused to apologize for putting people at risk.

One of the articles mentioned that the settlements Sony reached in the lawsuits following this case involved paying those affected less than $10 and letting them download a few CDs for free. I suppose that this cost Sony millions of dollars between the actual money paid out and the revenue from the free downloads that might otherwise have been sales. Given that it was the mid-2000s, and that, I imagine, many of the Sony executives did not have a technical understanding of the DRM software they were deploying, I think that they were appropriately punished. There clearly was some negligence or incompetence involved, between them not understanding everything the software First4Internet was making for them could do, and not asking for a full and detailed explanation of what it could do beyond the DRM specifications. But in retrospect, and as someone who was not really affected by this (in 2006 I was 10), I think that the punishment fit the crime.


I would agree that corporations should be held to some moral and ethical code if we are granting them personhood. However, I do not necessarily think that they should be held to the same code that an individual person is. For example, it could be argued that it is ethical for someone to protect a loved one who did something immoral. Corporations should not be expected to do this, nor should they be allowed to, as companies do not really have “loved ones,” unless you count high-level executives, and if you do, then companies should certainly not be allowed to protect executives if said exec did something immoral. In the context of the Sony rootkit case, I think it is clear that Sony was held to a standard. They essentially sold a rootkit that could allow hackers to compromise personal computers, and they failed to state in their EULA that they were installing such software or that the software could secretly send data to Sony servers. If an individual installed rootkits on millions of machines under the guise of providing some service, say remotely servicing computers, that person would likely be fined and go to jail if he or she was knowingly installing the rootkit. Sony was certainly fined, but as far as I know, none of the Sony BMG execs went to jail. I think this is largely due to them not having technical knowledge of their software; after all, they didn’t write it in-house, but had a separate company do it for them. Sony was still held responsible, but to a slightly different standard than any individual would have been.

Reading 07: Pervasive Computing

As all of the ad readings went over, the biggest ethical issue with advertising in today’s information age is the highly targeted advertising companies are able to do, and how they are able to do it. Since nearly everything is connected to the internet nowadays, you are unable to do anything without that action becoming a data point in the hidden profile kept on you by Google, Facebook, and all the other companies that buy and sell such data. Even companies that you may consider “good guys,” like your bank or employer, are likely selling data that you are lumped into, and often without your knowledge. But they usually can claim they have your consent, because, after all, you agreed to the terms of service.

I think that the problem with this is twofold: first, that the collection seems surreptitious, and second, that people are not made aware when their data is being bought and sold. One of the articles mentioned the story that came up about a year ago that Google was tracking users’ locations even when location services were explicitly set to off. This is a huge violation of trust. I do not know what Google’s response to this exposure was, whether they owned up to it or tried to claim that it was for some ultimately benevolent purpose. There are other, less blatant collections that go on as well, such as, as I mentioned earlier, companies like banks collecting and selling information about their customers. Being able to claim consent just because someone blindly agreed to such data usage by clicking the little checkbox leaves a bad taste in the mouth.

There are more ethical factors at play here than any one person could count, and frankly, it’s so overwhelming that it’s hard for me to give a decisive opinion on any of them. On one hand, users did click the “I agree” box when they made their account on “insert service name here.” So, they agreed to have their data collected and then bought or sold, regardless of whether they actually read the agreement or not. Since every platform and every company is able to collect data on you, even parts of your life that you think are well separated are easy to piece back together. I would like to see data mining and targeted advertising cease, but that would require a huge paradigm shift among technology users around the globe, and short of something terrible happening, I don’t see anything in the future jolting people out of their ways. It’s too easy to just accept it and let companies have their way with our data.

I really like this quote from one of the articles: “So, the companies making the data-tracking tools have serious incentive to erode the idea of privacy not just because they can make (more) money, but because privacy erosion leads to more privacy erosion. The system is self-reinforcing. This is a problem.” It seems that people now expect their data to be used without their knowledge, and no one has any real reason to believe otherwise. It’s not in any company’s best interest to behave differently. Honestly, I think that if you use an internet-connected device, even irregularly, you have lost a lot of your privacy, at least to tech and advertising companies. Chances are, though, you still have a lot of privacy with regards to friends, family, and the strangers you see walking down the street, so does it really matter? I don’t like this thought. It seems wrong to say that if there isn’t a significant impact on your personal life, just live with it. I can’t really come up with good counter-arguments and examples, however, so maybe there is some seed of truth to it not mattering in the grand scheme of things.

With regards to online ads and ad-blockers, simply put, I hate online advertising and use an ad-blocker with hardly any pangs of remorse. At times I do feel bad for preventing some website that I actually like and support from making its ad revenue, but for the most part, I think that ad-blocking is a safety precaution and necessary for any kind of positive user experience on the Web. The article that claimed that ad-blockers are a form of theft has some merit, because, as the Kant article mentioned, if everyone on the internet blocked ads, most sites would die off. But there is too much fault on the part of the ad sellers for me to accept such a position. If they had been stricter about what types of ads they published on websites and put less intrusive and distracting content in ad space, the problem would never have arisen. However, since it is a problem today, I don’t think any change to ads in the future could change the people who use ad-blockers; I certainly won’t stop using one.

Reading 06: Edward Snowden, Government Surveillance

When the Apple-FBI GovtOS thing was going on, I remember feeling very strongly that Apple was in the wrong and had no grounds to take the position that it did. Now that some time has passed and I’ve had the chance to read articles supporting both sides, my convictions have lessened. However, I still think that Apple was wrong for not assisting with a terror case. The CEO made a fair point in his customer letter that if they made a weakened version of the operating system, they would undoubtedly be inundated with requests to use the software for other legal cases. I also applaud him for saying that he wants better legislation regarding topics like this, and that if a law were drafted that required him to help, he would immediately do so, though whether he really would have been true to his word is up for debate.

I think the flaw in his reasoning, though, is that there is clearly a major difference between a terrorist attack on American soil and most other crimes. I think that it would be fairly straightforward to refuse the use of a GovtOS in any case except terrorism. I imagine that it is also more than feasible to restrict access to such software by keeping the device it is stored on off of any and all networks and severely limiting who has access to it to only a few individuals.

I think that it is possible for companies to ship devices and software that protect users’ data from inappropriate access by third parties, but also allow for emergency access, in dire circumstances, by employees with access to the physical hardware. Encryption itself is something else to consider, and I feel it needs to be talked about more, given that quantum computing is becoming more of a reality. If current encryption standards are rendered obsolete, then protecting your data is an impossibility.

Personally, I feel that companies do have a responsibility to help law enforcement, particularly in regard to terror cases, and especially if said company stores or regularly touches its users’ data. As many of the articles mentioned, given the information age we live in, with all of its mass sharing, I don’t find this unreasonable. For many companies, when we click the “I Agree” button on the terms of service agreement, we allow our data to be used for many purposes worse than national security. I don’t see how people can be terrified that Big Brother may be watching them, and then happily hand their data over to a greedy company.

I find it strange that there are people who happily allow their data to be harvested in the name of targeted advertisements but recoil at the notion of data being used for security. I’ve had these arguments too many times, and at this point all I want to say is that there are trade-offs for everything, freedom and security included. If we want national security, we have to make sacrifices. I feel like, when presented with the choice of losing a small amount of privacy or allowing a terrorist attack, the answer is obvious.

Reading 05: Engineering Disasters, Whistleblowing

If you really need me to share, since I likely have a view that isn’t the popular opinion, I can; otherwise I’d rather not.


The theme that all of these “whistleblower” articles have in common is that the leaker decided that they needed to take matters into their own hands and go to the media to resolve an issue they observed in their workplace. They did not use appropriate channels, resulting in damage to themselves.

I have done internships with the Department of Defense. They care a great deal about whistleblowing, and make a visible effort to educate their employees that there are official channels to go through if they have a concern. In all three cases, NASA, Boeing, and Manning, you have military personnel and government employees or contractors disregarding the legal options they have for sharing concerning information and instead choosing to go to the media. (I think that this is the case for the NASA one. From the articles it sounded like Boisjoly went to the media before going to the presidential commission.)

In none of these cases can the subject be considered a federal whistleblower. A federal whistleblower must have reason to believe that their employer violated some law or regulation, and then testify or commence a legal proceeding. Going to the media first does not meet these requirements, and in doing so, the leaker forfeits his right to be protected under any US whistleblower protection laws.

As you might have guessed, I have a very negative opinion of Chelsea Manning, and an even more negative opinion of WikiLeaks. I think that her sentence was deserved and that the commutation of her sentence was inappropriate. While she may have been genuinely concerned that there was something that needed to be brought to light, what she did was illegal, and the release of the embassy documents, as the article mentioned, had the potential to put American lives at risk. There were channels that she easily could have gone through to legally make people aware, but she chose not to utilize them.

All of these leaks involved people related to federal agencies or federal contractors (I know Boeing is a DoD contractor, but I could not find out whether the two employees in question were contractors themselves). There are many laws and regulations in place to protect true whistleblowers on the public side. There is not as much in the private sector, to my knowledge. Whistleblowing, when done properly, is important to keeping companies honest and protecting consumers. I cannot fully speak to what exists in the private sector, but from the readings it seems like there is much more room for illegal retaliation from the employer. I’m not sure how that can be fixed, because on the one hand, as private companies, they should not have the government controlling them. On the other hand, retaliating or discriminating against a true whistleblower is, and should be, illegal, and the only way to resolve such a case is to take it to court.

I had trouble reading all of the articles related to Boeing, as many of them required a subscription to the site. However, from what I did gather, the employees did try to bring their concerns to their supervisors. When this did not work, instead of elevating the issue like they should have, they went straight to the media. For this reason, I don’t think that they qualified for whistleblower protection. If they were being threatened by Boeing, I believe that is grounds to go to the Department of Labor and produce the evidence of being threatened. Then they can continue the process by producing the poor security audits to the DOL and the Office of the Inspector General, not the media, as evidence of why they were being threatened.