[HN Gopher] Meta cuts Responsible Innovation Team
___________________________________________________________________
Meta cuts Responsible Innovation Team
Author : cpeterso
Score : 352 points
Date : 2022-09-09 15:27 UTC (7 hours ago)
(HTM) web link (www.wsj.com)
(TXT) w3m dump (www.wsj.com)
| thenerdhead wrote:
| Not to sound too bleak, but wasn't Pandora's jar opened many
| years ago? We can hardly perceive the societal harms social media
| has done, given that we still live in the era where it is the
| main metaphor. In other words, you cannot reform it once it's
| out.
|
| > The team was made up of engineers as well as people with
| backgrounds in civil rights and ethics, and advised the company's
| product teams on "potential harms across a broad spectrum of
| societal issues and dilemmas."
|
| I suppose the question is whether people are surprised that this
| team existed in the first place. It sounds like it adds lots of
| legal liability: knowing about certain problems and not doing
| anything about them in due time. I wonder what they ended up
| finding if they found anything at all not already known to the
| public.
| zx8080 wrote:
| It's as if FB itself were not actually harmful to society [0].
| /s
|
| 0 -
| https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook...
| dimensionc132 wrote:
| BrainVirus wrote:
| It's like reading that BP disbanded its renewable energy
| innovation team. Am I supposed to be sad? I'm not. I don't
| care. It was all fake from the very beginning, even if the team
| had many people who were naive enough to think otherwise.
|
| The point of such teams is not to improve ethics. The point is to
| have enough credibility to affect the discussion. (E.g. observe
| how much "research" in AI ethics has financial ties to large
| companies that are heavily invested in AI-based products at the
| time.)
|
| What a properly ethical corporation would do is openly admit
| conflicts of interest and listen to _external_ feedback.
| mgraczyk wrote:
| Not really, because renewables will actually make BP better and
| there's a legitimate case that such a team would improve long
| term shareholder value.
|
| This team at Facebook had a very low chance of doing anything
| good for the company.
| gapplebees wrote:
| hintymad wrote:
| I have a question about incentives in organizations. If you form
| a team to identify problems in your org, wouldn't the team keep
| finding more problems, no matter what, to justify its own
| value? A DEI officer will keep finding more injustice in a
| university, which is how over the years the U-M History
| Department ended up with 2.3 DEI officers per staff member. Or
| how Gebru's team kept finding more and more egregious behavior
| in Google AI.
|
| On the other hand, security teams are highly regarded in a
| company, even though they are supposed to identify
| security/privacy problems in the org as well. What makes security
| different from those ethics teams?
| bombcar wrote:
| Security is provable in some way, as in "look you can exploit
| us I'm doing it right now" - many of the others are more
| nebulous.
| [deleted]
| cptcobalt wrote:
| I'm no fan of this. While I don't know what their value and
| impact was (and this could have led to the team's demise; it's
| also Meta), I doubt they were fully empowered to ask the right
| questions and push for the right change.
|
| A company with this level of influence over the world should work
| to be self-calibrating: don't significantly manipulate people,
| block the spread of hatred, don't let unknown actors influence
| global politics, don't design your products in a manner that
| lets others use them in unexpected ways that cause people
| harm, etc. Does that still happen at Meta after this? I doubt it.
| itsoktocry wrote:
| > _While I don't know what their value and impact was_
|
| Seems like that's an important piece of information to have
| before passing judgement on the decision.
| cptcobalt wrote:
| Contextually, my `-and this could have led to the team's
| demise` was an attempt to add a fair qualifier to my
| statement, but probably missed the mark.
|
| I agree, but given the company, context, and reporting, the
| only people that would categorically know are Meta insiders,
| and journalists poking at them...not most casual HN
| commenters. :)
| bognition wrote:
| I realize this doesn't need to be repeated, but companies have
| one mission and one mission only: to make more money. Anything
| that works against that is a distraction. Turns out spreading
| hate, manipulating global politics, and exploiting human
| psychology is extremely lucrative.
|
| The problem isn't misguided or unregulated corporations; the
| problem is capitalism.
| musesum wrote:
| I wonder if internal estoppel is a thing? Two options:
|
|   Option A:
|     1) ethicist objects to something
|     2) management declines the objection
|     3) whistleblower reveals the declined objection
|     4) public outcry ensues, stock price falls
|     5) board members become agitated
|     6) C-suite wakes up to another gut punch
|
|   Option B:
|     1) ethicist raises an objection
|     2) management acts on the suggestion
|     3) revenue misses target
|     4) stock price falls
|     5) board members become agitated
|     6) C-suite wakes up to another gut punch
|
| Choose.
| vel0city wrote:
| It seems we can gain a lot of efficiency by having a different
| c-suite gut punched every morning as a part of the NYSE opening
| bell ceremony.
|
| Then we can just skip steps 1-5.
| usefulcat wrote:
| In Option A above, it seems like things mostly stop at step 3
| for FB. Public outcry? Falling stock price (as a result of
| public outcry)? I don't really see that happening very much.
| colinmhayes wrote:
| It's hard to say, because the facebook hearings happened at
| the same time as the iPhone anti-tracking stuff that cost
| facebook billions, but I think it had an effect.
| musesum wrote:
| Was thinking of Frances Haugen revealing her identity
| coinciding with the market drop [1]. I dunno; IANAB (Banker)
|
| [1] https://www.barrons.com/articles/facebook-stock-tech-
| apple-5....
| karmasimida wrote:
| It says a lot that those teams are really just a welfare program
| in the showcase window.
|
| Good to see them let go.
| neonate wrote:
| https://archive.ph/C34Y0
| [deleted]
| wpietri wrote:
| Well this part is very familiar:
|
| > its work was given prominence in the engineering operation by
| former Chief Technology Officer Michael Schroepfer, who announced
| last year that he was stepping down.
|
| At the end of 2016, I joined Twitter to lead one of the anti-
| abuse engineering teams. My boss, who led the effort, was great:
| loved Twitter, hated abuse, very smart. 6 months later the CTO,
| who had been behind the creation of the anti-abuse engineering
| effort, left. My boss, previously gung ho, left in a way that
| made me think he was getting signals from his boss. And shortly
| after that, said boss's boss said that our team had succeeded so
| well that there was no need for us. He scattered the engineers,
| laid off the managers, me included, and declared victory. We all
| laughed bitterly.
|
| What these have in common for me is a high-level executive
| launching a special team with great fanfare in a way that
| addresses a PR problem. But because PR problems are generally
| fleeting, as soon as do-goodery loses its executive sponsor,
| everybody goes right back to the short-term incentives, meaning
| things like "responsibility" go right out the door. At least
| beyond the level that will trigger another PR disaster.
|
| And if you're wondering why you don't hear more about things like
| this, you're not supposed to. At least for the managers laid off,
| it was a surprise meeting and then getting walked out the door.
| In the surprise meeting, they offered a fair chunk of money (for
| me, $40k) to sign an additional NDA plus non-disparagement
| agreement. I happen to have a low burn rate and good savings, so
| I didn't sign. But I know plenty of people who have looked at the
| mortgage and kid expenses versus the sudden lack of income and
| eventually signed.
| piva00 wrote:
| As someone who (after 15+ years of a career) has been feeling
| alienated (or worse, mocked) when bringing up ethical
| issues around tech I work on or design, I wanted to thank you
| for your integrity. It feels good to not be alone.
| drdec wrote:
| Out of curiosity, what do you feel you gained by not signing
| the NDA/non-disparagement that was worth so much to you?
| whatshisface wrote:
| We gained the story, they did it for the benefit of the
| community.
| jonas21 wrote:
| The ability to write that comment?
| jonny_eh wrote:
| And more importantly, not fearing being sued for
| accidentally saying the wrong thing to the wrong person.
| paganel wrote:
| And having a cleaner conscience down the road.
| BoorishBears wrote:
| Maybe I'm just the worst kind of person because I'd 100%
| sign that and still talk.
|
| I get that with tech salaries 40k is practically a
| relocation fee, but I see those agreements as being about
| as binding as an EULA. I'd revel in them coming after me
| and can't for the life of me imagine damages past the 40k
| they offered unless you straight up lie.
| [deleted]
| ipaddr wrote:
| Leaving that kind of money on the table to share a story
| is not a practical action that makes sense to me without
| a six figure book deal in place.
| misterprime wrote:
| Being able to share the content above when it is relevant has
| value, doesn't it? If the opinion is that this sort of
| corporate behavior is not healthy, and the person hopes to
| reduce or even eliminate that sort of behavior, then they
| need to be able to share that information freely with others.
|
| $40k is a lot of money to some people, and not significant to
| others. To that person, being able to share the single post
| above might have more value than an extra $40k in their
| savings account.
| JumpCrisscross wrote:
| > _what do you feel you gained by not signing the NDA/non-
| disparagement that was worth so much to you?_
|
| I wouldn't sign that. Less, if I'm honest, out of any sense
| of duty, and more to maintain leverage. If that story is
| worth $40k to the firm it could be worth more to me, down the
| road, should my past employer and I find ourselves
| disagreeing.
| travisjungroth wrote:
| Sounds like blackmail. You're not whistleblowing or suing
| for damages, so how else are you getting leverage from a
| story in disagreement?
| JumpCrisscross wrote:
| > _how else are you getting leverage from a story in
| disagreement?_
|
| It's leverage held in reserve, not planned for immediate
| exploit. If we part ways and all is hunky dory, it sits
| stale. But if _e.g._ the firm gets bought by an asshole
| and he frivolously pursues ex-employees for dumb reasons,
| _e.g._ deciding a non-compete covers everything from
| finance to gardening, I have something to fight back
| with. (One can similarly ask why the company needs non-
| disparagement protection.)
|
| Employers and former employees get in stupid tiffs all
| the time.
| [deleted]
| melony wrote:
| Blackmail's legality depends on whether monetary
| compensation is being demanded.
| jakelazaroff wrote:
| Leverage comes from having something that someone else
| cares about. You have the ability to talk about and
| disparage the company. The company wants you to not have
| that ability. They want this so much that they're willing
| to pay you _even if you don't have immediate plans to do
| anything_. That's leverage.
|
| How is it blackmail if the company is the one offering
| the money?
| jrockway wrote:
| It's not blackmail. You're taking $40k in exchange for an
| unbounded number of problems. Maybe someday some exec
| will come up with "Oh, we used C++ at Twitter, so
| answering that question on StackOverflow is a violation
| of your NDA, please return the $40k immediately." Now
| you're on the hook for either $40k of court costs, or
| writing them a $40k check. (I don't think you get the
| $15k you paid in taxes, either.)
|
| For a few million dollars for a bounded period of time,
| sure, disconnect yourself from the Internet for that
| bounded period of time. For $40k, you're just taking on
| unnecessary problems that you don't actually have the
| resources to solve.
|
| (I was offered $0 to sign an NDA after leaving my last
| job. I did not sign it.)
| ipaddr wrote:
| Why would you use your real name on stackoverflow? Using
| your real name creates these imagined problems.
| elliekelly wrote:
| It's not blackmail to want the opportunity to be able to
| tell the truth in the future. Jennette McCurdy recently
| wrote a memoir and mentions that Nickelodeon offered her
| $300k to sign an NDA about her time as a child actor
| there. She declined their offer. She didn't blackmail
| Nickelodeon but it was absolutely the right decision for
| her since she's more than made up the $300k on the sales
| of her book. And for the public, who got to read her
| excellent book.
| travisjungroth wrote:
| I get not signing NDAs in general because you want to be
| able to tell the truth. This whole thread is an example!
|
| It's the "if we end up disagreeing part" that seems like
| blackmail. And, hey, maybe that's what they're going for!
| Could say "yep, I want to be able to blackmail them if I
| need to." But the comment I replied to wasn't that
| explicit.
| [deleted]
| lostlogin wrote:
| It was worth $40k to the OP, but for Twitter it cost that
| multiplied by the number of engineers (minus OP).
|
| That silence was worth a lot to Twitter.
| JumpCrisscross wrote:
| > _That silence was worth a lot to Twitter_
|
| Totally agree. These agreements make sense. Just saying
| that if I were shown the door, I wouldn't sign.
| (Different question if I'm asked while happily employed.
| Would be more inclined in that case, provided it only
| applied through the date of signing and if it were
| mutual.)
| radicalbyte wrote:
| I was just about to say: stall them out, wait until
| everyone else signs, then point out that if they're willing
| to spend $40k * number_of_employees to keep the silence,
| you're making them a counter-offer of $40k *
| number_of_employees.
| dfadsadsf wrote:
| I do not know about Twitter, but two months' salary
| (ballpark of $40k for a manager) is my company's standard
| offer in layoffs (I think it's actually 2 months + 2
| weeks for every year of service). You have to sign a
| release+NDA to get it though. Twitter did not try to buy
| silence - they just offered money to everyone to keep good
| feelings (generous severance) and avoid petty litigation.
|
| An example of buying silence is the $7M settlement with
| the security guy, and even that apparently did not work.
| wpietri wrote:
| Twitter absolutely was trying to buy silence. Per a
| suggestion from my lawyer, I offered to sign a version
| with everything except the non-disparagement clause. They
| said no.
|
| And Twitter very clearly did not give a shit about "good
| feelings". Otherwise they wouldn't have done it for us as
| surprise meetings and security guards walking us out
| after, with salary ending that day. If you want good
| feelings, the way you do it is giving people paid time to
| find a new job. It would also have been nice to be able
| to finish up my work and tidy up loose ends. E.g., I had
| an employee whose annual review I was in the middle of
| doing. She was stellar and she deserved to be properly
| celebrated, rather than having some rando manager come in
| and try to guess after the fact.
| jacobr1 wrote:
| It isn't just silence. There is usually a set of things
| in a severance deal - like agreeing not to sue for
| wrongful termination, or reconfirming existing NDA or
| exclusivity or other employment terms. When I've been on
| the managerial side of things removing the potential
| downside of any future lawsuits was the overriding
| concern.
| wpietri wrote:
| It's not what I gained. It's what I would have lost.
| [deleted]
| drdec wrote:
| What did you feel you would have lost?
|
| I can imagine answers, but I've never been in that
| situation. There may be things I won't think of.
|
| This is why I'm asking - my first instinct is take the
| money so I'd like to fully understand the thinking of the
| opposite side.
|
| In any case thanks for replying.
| tgsovlerkhgsel wrote:
| The freedom to talk about something you spent a
| significant part of your life on?
| elliekelly wrote:
| Well they couldn't have made that comment, for one.
| wpietri wrote:
| There are two basic approaches to work. You can be a
| minion or a professional.
|
| If you're a minion, then you just build the volcano
| headquarters and the atomic missiles because, hey, it's
| your job. They don't pay you to think. Your job is to
| make your boss look good. You give 110% and think outside
| the box only when it's firmly inside the box of the
| primate dominance hierarchy you are embedded in.
|
| Professionals, though, explicitly recognize they are part
| of a society. They owe something to the client/employer,
| but also to the public and the profession. For example,
| read through the preamble here:
| https://ethics.acm.org/code-of-ethics/software-
| engineering-c...
|
| I see myself as a professional. I sell my labor, not my
| ethics. So what would I have lost by selling my ability
| to speak out about a problem? My integrity. My
| independence. My freedom. $40k is a lot of money, but it
| wasn't worth 40 years of shutting up about anything
| Twitter would want me to keep quiet about.
| orzig wrote:
| You are a legit hero. Our society focuses exclusively on
| contributions that involve some dramatic moment, and
| doesn't recognize those who show up for what's right
| every day.
|
| Thank you, seriously.
| techdragon wrote:
| This is the same sort of worldview regarding professional
| ethics that led me to make my first discussion with any
| client _free_. It's unethical (and bad business) to ask
| customers to pay my rates in order to explain their
| problems to me, it would be taking advantage of them to
| spend an hour or two, or even three, work out they're
| either not a customer I want, or that I haven't got the
| experience with the technology they are using, or that
| yes I could help them but their budget wouldn't cover the
| time needed at the rate I charge... etc.
|
| I borrowed this from the way quite a few lawyers work.
| It's served me well and garnered quite a bit of goodwill
| over the years, but at the end of the day I do it
| primarily because I sleep better knowing I'm not ripping
| people off and taking advantage of them.
| wpietri wrote:
| For sure! I have no tattoos and probably never will. But
| "value for value" has such meaning for me that it would
| sure be a candidate.
| criddell wrote:
| I'm always ripping on people using the title _Engineer_
| when talking about software development. Your comment is
| a perfect example of how I would expect an Engineer to
| think about their work. I love it.
| ethbr0 wrote:
| Kudos to you, and glad you took the time to speak about
| your thoughts here. It's good for people to have exposure
| to later career challenges... and especially options they
| might not consider.
| matheusmoreira wrote:
| I have huge respect for you. 40k USD is a ton of money
| where I live due to the exchange rate.
| twistedfred87 wrote:
| This is so incredibly refreshing to see. I can't express
| just how much I appreciate and respect this.
| mellavora wrote:
| Hero points.
|
| Thanks also for explicitly spelling out what it means to
| be a professional. (duty to profession and society as
| well as employer, written source, summary "sell my labor
| not my ethics")
| PuppyTailWags wrote:
| After a certain amount of money to cover life expenses +
| recreation, additional money has diminishing returns.
| Once one reaches that point, leading a fulfilling life
| becomes the next highest priority. I would speculate the
| kind of person who joins an anti-abuse team would not
| find accepting hush-money about the lack of anti-abuse
| systems to be something they would find aligning with
| their vision of a fulfilling life.
| asiachick wrote:
| What is that number for you? (and others)
|
| For me it's pretty dang high. Like on the order of 5-20
| million USD in the bank. At some point I'm going to stop
| working. Rent and life expenses where I live currently
| are probably $100k a year (yes I could live on less or
| move, but that's part of the point).
|
| Let's say I stop working at 65 and I live to 85. That
| means I need at least 2 million dollars in the bank to
| keep spending $100k a year and it assumes I die at 85. If
| I happen to live to 95 or 105 I'd be S.O.L. Also add in
| escalating medical expenses, inflation, and other issues,
| and 5 million in the bank is IMO the minimum I'd need to
| feel I could discard other money and stop worrying about it.
|
| And that assumes I stop working at 65. If I were trying to
| stop earlier, that would go up. I get that at some point I
| could theoretically live off the interest.
|
| My point is, at least for me
|
| > a certain amount of money to cover life expenses +
| recreation, additional money has diminishing returns.
|
| is generally false. It's usually part of the "if you make
| $75k a year, more won't make you happier" claim, but IMO
| that's not true, because I'm screwed if that $75k a year
| stops.
|
| Also, the source of that point is often misquoted. It says
| happiness doesn't increase with more money, but that life
| satisfaction does keep increasing with more money. Here's
| an article pointing out that the original study did say
| satisfaction increased, as well as a new study that says
| happiness increases too.
|
| https://www.forbes.com/sites/alexledsom/2021/02/07/new-
| study...
|
| If I had even more money I'd angel invest. I think that
| would be pretty fulfilling. If I had even more money
| there's all kinds of projects I'd like to fund. I expect
| I'd be pretty proud to fund them.
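| A minimal sketch of the arithmetic above, in Python (the
| $100k/year spend, the ages, and the ~$2M / ~$5M endpoints come
| from the comment; the 3% inflation rate and the function name
| are assumed placeholders):
|
|     def savings_needed(annual_spend, retire_age, die_age,
|                        inflation=0.03):
|         """Total inflation-adjusted spending from retirement
|         until death, ignoring investment returns."""
|         years = die_age - retire_age
|         return sum(annual_spend * (1 + inflation) ** y
|                    for y in range(years))
|
|     # Flat dollars, dying at 85: 20 years * $100k = $2.0M.
|     print(savings_needed(100_000, 65, 85, inflation=0.0))
|     # Living to 95 with 3% inflation: roughly $4.8M, close to
|     # the $5M floor the comment arrives at.
|     print(savings_needed(100_000, 65, 95))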
| PuppyTailWags wrote:
| I never said more money doesn't make you happier. I said
| more money has _diminishing returns_, and _other things
| become more important_. Even you are only able to suggest
| things that bring you personal fulfillment as a way you
| can use more money. This actually supports my point that
| it makes sense for someone to decline money that doesn't
| bring them fulfillment if fulfillment is what they're
| going for.
| asiachick wrote:
| Maybe, but as someone that turned down $6 million because
| of my conscience (sold stock solely because I didn't like
| feeling guilty holding it and it's gone up 4x-6x since
| then), I could do a ton of good things for others if I
| had that $6 million.
|
| It's not like we're talking about killing babies. We're
| talking about signing an NDA or, in my case, not being 100%
| on board with a company's behavior in ways that are
| arguably ambiguous. As an example, say I had New York
| Times stock and was upset at their hypocrisy of
| complaining about ad-supported companies while themselves
| being an ad-supported company. Whether ads on NYT are
| good or bad is a debatable point. The point being,
| nothing the company whose stock I was holding did was
| unambiguously evil. But I chose my conscience over
| arguably trivial issues. In this particular case I think
| it was a mistake. If the company had been truly evil (by
| my definition of evil) then I'd be more okay with my
| decision.
| woobar wrote:
| I am sorry, but no, you did NOT turn down $6M. You have
| made an investment decision that cost you imaginary
| gains. This is a big difference from turning down a $6M
| paycheck.
| tayo42 wrote:
| But nothing really came out of it? It's not like Twitter's
| reputation got ruined or there was some repercussion for
| the company from freely talking about it.
| AshamedCaptain wrote:
| What repercussion to expect? TFA is literally about
| Facebook doing the same thing, today.
|
| Don't see many pitchforks...
| tayo42 wrote:
| Yeah, that was my point. The thread was about being
| offered 40k to be quiet, but the thing I'm wondering is:
| be quiet about what? Might as well take the 40k if there
| was nothing actually scandalous going on.
| PuppyTailWags wrote:
| This implies that the poster is the kind of person who
| doesn't find fulfillment in making choices that align
| with their ethics system unless the world also aligns
| with their ethics system. Like I said before, I don't
| think someone who would join an anti-abuse team follows
| this behavior pattern.
|
| Additionally, the belief in one's own moral choices and
| behavior is often one of the important steps in finding
| fulfillment in yourself. If your fulfillment is reliant
| on external validation, you will always be found wanting.
| didibus wrote:
| It's interesting that people value free speech so much,
| yet seem to have no quarrel with being allowed to sell it
| away.
| yamtaddle wrote:
| I _suspect_ you're looking at it from either a
| nihilistic "let the world burn, I just want to get mine"
| perspective, or (probably more likely?) a perspective
| heavily influenced by consequentialist ethics ("is the
| good I can do by being able to talk about these things
| worth more, in a net-good-in-the-world kind of way, than
| $40K? I'm not sure enough that it is, to justify turning
| down the money"). Or maybe a blend of the two (most folks
| get a _bit_ selfish and let their ethics slide at least
| some, when sufficiently-large dollar values start to get
| thrown around, after all)
|
| There are other perspectives, though. Here's a starting
| point for one of the big categories of thinking that
| might lead one to turn down $40K essentially just _on
| principle_:
|
| https://en.wikipedia.org/wiki/Virtue_ethics
| elliekelly wrote:
| > And shortly after that, said boss's boss said that our team
| had succeeded so well that there was no need for us.
|
| This is infuriating. The boss's boss knew this wasn't true. You
| knew it wasn't true. Why lie? Especially with such an obvious
| and unconvincing lie? Twitter is so addicted to misinformation
| they're even spreading it internally and off-platform.
| wpietri wrote:
| I presume because there were enough people who either liked
| him or found that view convenient that nobody who mattered
| was going to double-check his words.
|
| Which is what happens all the time in power hierarchies. Some
| people enjoy that bad things are happening, more are
| indifferent, and most of the rest are kept too busy to really
| think about it. Or too distant from power for any thinking to
| lead to action. E.g., America's history with slavery.
|
| But yes, if you are paying attention, it's fucking
| infuriating.
| 650REDHAIR wrote:
| I've done the opposite of you and regretted it. Good for you
| for sticking to your principles.
|
| I appreciate your $40k comment here!
| wpietri wrote:
| No shame! I was lucky in all sorts of ways. One is that years
| of consulting work left me with habits of _always_ having
| enough of an emergency fund that I could afford the time off.
| Another is living in a moment where my weird brain quirks
| made me a highly paid professional rather than that annoying
| car mechanic who might solve a hard diagnostic problem
| quickly but takes forever to get around to your oil change.
| And perhaps most importantly, early on I had a couple of
| dubious jobs that made me really think about my ethical
| standards for work.
|
| I'm glad to hear you learned your lesson!
| patcon wrote:
| > my weird brain quirks made me a highly paid professional
| rather than that annoying car mechanic who might solve a
| hard diagnostic problem quickly but takes forever to get
| around to your oil change
|
| Thank you so much for being humble about your skills. I
| wish more of us took this perspective
| ak217 wrote:
| > a highly paid professional rather than that annoying car
| mechanic who might solve a hard diagnostic problem quickly
| but takes forever to get around to your oil change.
|
| There are so many of us who can relate. Huge kudos to you
| for your integrity and for staying humble.
| lumost wrote:
| It's really hard to find a good metric for "Keeping a social
| network healthy". Most of the changes introduced by a
| responsibility team will inherently hurt short term metrics or
| block others from making a change.
|
| I wonder if there is room for a tool which tracked the rate at
| which users interact with negative or psychologically high risk
| content on a user generated site.
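| A minimal sketch of such a tracker, in Python, assuming a
| hypothetical interaction log of (user_id, content_id) pairs and
| an external high-risk classifier (every name here is invented
| for illustration, not an existing API):
|
|     from collections import defaultdict
|
|     def risky_interaction_rate(events, is_high_risk):
|         """Per-user share of interactions that touch
|         high-risk content."""
|         total = defaultdict(int)
|         risky = defaultdict(int)
|         for user_id, content_id in events:
|             total[user_id] += 1
|             if is_high_risk(content_id):
|                 risky[user_id] += 1
|         return {u: risky[u] / total[u] for u in total}
|
|     # Toy usage with a stubbed classifier:
|     events = [("alice", "c1"), ("alice", "c2"), ("bob", "c2")]
|     print(risky_interaction_rate(events, lambda c: c == "c2"))
|     # {'alice': 0.5, 'bob': 1.0}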
| [deleted]
| Waterluvian wrote:
| Thank you for sharing this. An expensive comment but one worth
| making.
| bhaney wrote:
| I don't get a chance to read many comments that cost $40k to
| make. Thanks.
| darth_avocado wrote:
| $40k is two months' pay, probably.
|
| Edit: To be clear I didn't say this to shame the OP. I said
| this to highlight that the company didn't offer up much. I
| say this because I have been in a similar position in the
| past where their "generous" offer was two months severance
| (same ballpark) which imo was insulting. Didn't sign it
| either and told them to F off.
| kwertyoowiyop wrote:
| How many months' salary would the writer need to have lost
| before you unlock some praise?
| version_five wrote:
| I'd look at it a little differently. The 40k puts him on
| the hook for something. He (or someone who signs an
| agreement like that) is in a position where something he
| says could be construed as violating the agreement, and he
| could get taken to court, even if he's in the right. No
| agreement, nothing to worry about. Why take an immaterial
| amount of money to be personally liable for something?
| Getting to tell the story is a minor perk.
| formerly_proven wrote:
| > Why take an immaterial amount of money
|
| k
| munk-a wrote:
| I think this somewhat speaks to the pay gap present on
| this platform. From my position I'd happily accept a 40k
| payout if you wanted to contract me to never say the word
| "are" for two years. It's all a balance - if you have
| 20 million sitting in the bank then 40k sounds like
| nothing, but 40k is a lot of money to most people -
| enough that they'd be happy to agree to some terms that
| won't significantly affect their lives just to take home
| the cash and get, let's say, two months of vacation
| budgeting (a conservative estimate) to enjoy.
|
| 40k is a lot of money.
| citizenpaul wrote:
| I get what they are saying. A lawsuit over any perceived
| violation of this NDA would easily cost over $40k to
| fight or settle. If you're highly risk averse, it may
| actually make sense on a personal level not to sign any
| NDAs for any amount below instant retirement money, like
| 8 figures. If you sign, it will always hang over your
| head for the rest of your life.
| corobo wrote:
| Yeah 40k would instantly solve all my immediate problems
| and set me on a path for life haha
|
| One day I hope to see giving up 40 grand as a viable
| option
| WalterBright wrote:
| I'd miss:
|
| 1. International Talk Like a Pirate Day.
|
| 2. Passing by John Cook without an R-R-R-R me hearties!
| doctor_eval wrote:
| Maybe you could have an accent.
|
| Ahhh me mateys !
| mhh__ wrote:
| What do pirates have inside their compilers?
|
| IR, matey
| gtowey wrote:
| I think the problem is that 40k is a lot of money to the
| average worker AND it's virtually nothing to the company.
|
| It highlights just how out of control the wealth gap is
| becoming.
| burnished wrote:
| So two months, assuming 40 hour work week, ends up being a
| comment with 320 hours of integrity behind it. Still very
| impressive, thanks for bringing it to the time domain.
| nightski wrote:
| I would have taken it and given it to a charity. Instead we
| get a silly HN comment.
| avg_dev wrote:
| It's hard to have integrity when you are hurting for money.
| I am glad the poster was doing okay and was able to stand
| up for what they believed in.
| fakethenews2022 wrote:
| It is common practice for companies to offer this if they
| have something to hide.
| htrp wrote:
| It's common practice in general to offer anywhere between
| 2-4 months for redundancies; the non-disparagement clauses
| are tacked on to make sure you can't grind an axe with
| your now ex-employer.
| saiya-jin wrote:
| Did you skip basic math classes? Or do your parents still
| pay all your expenses?
|
| Even with a 20k take-home salary (which means you have
| been around a bit, possibly having a family, a mortgage,
| other investments, and hobbies that cost a bit more than
| just breathing air, travel, etc.) you are saving just a
| fraction of that.
|
| For some the fraction is really tiny, for some a bit
| bigger. 40k (just the raw amount, untaxed) can easily be
| a year's worth of actual savings, even with a good job.
|
| Kudos to OP, bits like this keep my faith in humanity.
| darth_avocado wrote:
| Assuming 40k is worth two months salary, the actual
| amount you'd end up getting will be closer to 20
| something. It's all about perspective. 20k is a lot for
| someone making 100k/year or less. 20k is not a lot for
| someone who works for Big Tech and pulls in a lot more.
| rhacker wrote:
| yeah that's what I was thinking. If 40k landed in my lap,
| my current income and burn rate being what they are, I would
| easily finish all the walls on my house, and the roof.
| _the_inflator wrote:
| I experienced the change of tides first hand. Without proper
| senior management support, you are essentially nonexistent, no
| matter how well you do.
| astrange wrote:
| I'm slightly surprised they'd just lay you off instead of
| trying to fill other internal positions first. That seems like
| it'd cost less than $40k for one thing.
| 8ytecoder wrote:
| I was laid off (with a future date), paid an additional $30k
| to stay till that date and then hired back with an additional
| $30k bump in salary. Companies do all sort of stupid stuff.
| commandlinefan wrote:
| > fill other internal positions first
|
| Not (necessarily) getting down on OP, but the sort of person
| who's attracted to anything that Twitter would call "anti-
| abuse" might not be the sort of person who's appropriate in
| any other role.
| kwertyoowiyop wrote:
| Just being in that group might actually be a scarlet letter
| for other Twitter hiring managers.
| commandlinefan wrote:
| > other Twitter hiring managers
|
| Or any other hiring managers anywhere else - I'd be
| really hesitant.
| astrange wrote:
| Banning spambots is an essential part of running social
| media, it's not a secret team of SJWs. And they're
| currently in a lawsuit since Elon claims they don't
| actually do it and just pretend to.
| commandlinefan wrote:
| > Banning spambots
|
| OP was pretty explicit that his job was not banning
| spambots, but "protecting marginalized users who face a
| lot of abuse".
| astrange wrote:
| He said that's what he cared about, but the comment was
| about the "group" and the whole group probably does both.
|
| Also, Twitter ships anti-abuse features all the time so
| they seem to care about it. Although they're not
| especially strong, like the one that asks you to
| reconsider if your tweet has too many swear words.
| wpietri wrote:
| No, that's exactly correct. I joined Twitter to solve a
| problem. A problem that my boss's boss, as well as plenty
| of other people, did not give two shits about solving. He
| certainly did not want me under him, and it was far easier
| for him to engineer a "layoff" than try to transfer me
| somewhere else.
|
| And I get the Machiavellian calculus. I cared about Twitter
| and its users, especially the marginalized ones that were
| facing a lot of abuse. He cared about his own career, and
| wanted people who would put his advancement first. I would
| not have done that, so pushing me out, along with anybody
| else who really cared about the problem, was the correct
| move in his self-centered analysis.
| mkmk wrote:
| From a feature/capability perspective, what would be the
| most impactful ways for Twitter to help those users?
| orange_joe wrote:
| IMO, ban images of tweets. That's what allows the pile-on
| to live forever.
| wpietri wrote:
| Five years later, I have no idea. Answering that requires
| a lot of data that isn't available publicly. For me it
| would start with who's getting the abuse these days, and
| where it's coming from.
|
| But if I had to guess, top of my list would be faster,
| more targeted action to deplatform shitheels. For a while
| last year I was tracking a ring of nazi-ish jerks who
| managed to stay around eternally by getting new accounts.
| They'd either circulate the new account before the old
| one got banned or come back, follow a few people in the
| account cluster, and then say they were looking for their
| "frens", which would prompt a wave of retweets in the
| cluster so as to rebuild their follow graph.
|
| I'd also look for ways to interfere with off-Twitter
| organization of harassment of particular accounts, which
| has some characteristic signals. Interfere both by
| defanging the content (attenuated notifications,
| downranking) and faster suspension/banning for jerks that
| are part of the waves.
|
| I'd also love to see more support on the recipient side,
| but I haven't kept up with what those tools look like
| these days.
|
| That said, I believe they've made progress since I left;
| I think twitter has a notably lower proportion of shitty
| behavior than years ago. So kudos to whoever's been
| working on this problem across the reorgs that have
| happened since I left.
| [deleted]
| epups wrote:
| Sounds like pretty standard big corp stuff. I honestly don't
| understand why an NDA is even required here.
| bagels wrote:
| They probably don't want bad PR from everyone knowing that
| they laid off the team that is responsible for fighting abuse
| on the platform, given that it is still a problem that the
| public may be concerned about.
| BbzzbB wrote:
| The featured article being #2 on HN says something.
| ncr100 wrote:
| Is there a catchy term for this pattern?
|
| - "Pump and dump."
|
| - "Savior to villain."
|
| I see this pattern frequently - with corporations' power and
| influence, their ethics are checked only by law and public
| sentiment.
| TremendousJudge wrote:
| I think that the mistake is thinking that corporations can
| have ethics in the first place. A person can have ethical and
| moral codes, a corporation can't. Execs can write as many
| documents as they want, a corporation is not a person and
| will never be able to have the same consistency that we'd
| expect from a normal person with morals.
| wpietri wrote:
| Corporations don't have brains, but they do have cultures.
| Those cultures include ethics, just ones that often differ
| from what outside individuals would want. Or put more
| simply, "fuck the rich" and "fuck the poor" are both moral
| codes.
| jiveturkey wrote:
| such payouts can be negotiated. I would have asked for $400k.
| (then still not signed it!)
| AlbertCory wrote:
| > slowed its hiring amid rumors of potential layoffs
|
| From a Silicon Valley veteran: if you work there and read this,
| and you don't already have your resume circulating, it's too
| late. You want to be out there _before_ all the other Meta
| employees.
| chrisseaton wrote:
| Anyone who's been at Meta will be able to walk into any high
| paying job they want. I don't think it's an issue.
| AlbertCory wrote:
| Not true. They'll be able to walk into _some_ high paying
| job. Just maybe not the one they wanted the most, because
| some other Meta dev got it.
| drstewart wrote:
| How is "too late" when there haven't been any layoffs? Are you
| claiming there's a mass exodus ongoing from Meta right now?
| azemetre wrote:
| The idea is if they fired 2,000 devs (for example), suddenly
| you have to compete with 2,000 people (who are all likely
| good devs) all immediately looking for a job.
|
| This may not have been a problem in 2018, but it's definitely
| a problem in today's environment, with large numbers
| of Bay Area companies slowing their hiring.
|
| If you look for a job before all this, you're in a much
| better position at least.
|
| I wasn't around during the dot com era, but there were
| stories of devs ending up taking any job (programming or not)
| to make ends meet.
| cush wrote:
| There are currently over a million open software
| engineering roles in the United States alone
| whydoineedone wrote:
| whimsicalism wrote:
| The market has slowed but I still think engineers from
| firms like Meta are not really facing a tough job market.
| It's just much harder now to get your foot in, to join
| startups, or to get hired from a less prestigious company.
| nomel wrote:
| > Meta are not really facing a tough job market.
|
| The word "facing" is present tense, which is not the
| appropriate context for this scenario. _Everyone_ will
| have trouble if Meta fires a bunch of engineers. People
| from Meta may have a little less trouble, and some
| companies will even open up new reqs to absorb some for
| cheaper. But there are a limited number of open positions
| out there. No real imagination or speculation is required
| here, since this has all happened many times before.
| programmarchy wrote:
| If you wait until the layoffs happen, you'll be competing in
| a much larger pool.
| antipurist wrote:
| They're claiming that by the time the layoffs start,
| recruiters' inboxes will already be saturated with ex-Meta
| CVs.
| [deleted]
| __s wrote:
| This is a thread about cuts being made. He's implying there
| will be more cuts. If you aren't circulating your resume
| today, you probably won't be tomorrow. Don't wait for the
| cuts to get to you before you start circulating
|
| Unfortunately these times can be a bit rough to navigate. It
| can be tempting to cling to one's current position rather
| than move to a new company & be first on the chopping block
| hedora wrote:
| My experience is that having The Register run an article of
| the form "company $X cuts growth targets; cuts
| $Crown_Jewels team" is worth 1000x more than circulating a
| resume.
| AlbertCory wrote:
| At least four of the responses to this got the point.
|
| By the time layoffs or a mass exodus actually happen, you're
| competing with 100s of other good devs for the best jobs (not
| that you won't find _some_ job).
| mikedouglas wrote:
| From another Silicon Valley veteran: this was a horizontal team
| that lost its exec sponsor and so didn't have a clear way to
| make impact. This kind of thing happens all the time at
| companies and panicking is uncalled for.
| aussieguy1234 wrote:
| Perhaps someone or a group of people should put up the money to
| keep this team going outside and independent of Facebook, maybe
| as some kind of "Responsible Social Media" non profit entity. It
| sounds like important work, and without Facebook or some other
| for-profit employer being able to control what they say, imagine
| the insights the rest of the community would be able to get.
|
| Something similar happened in Australia when an old climate
| change denying government disbanded the "Climate Commission" (a
| government department tasked with investigating climate change)
| to save face. The employees of that particular department packed
| up and left their office, got funding and now they're known as
| the non profit Climate Council. They have remained a thorn in the
| government's side ever since and are still going almost a decade
| later https://en.m.wikipedia.org/wiki/Climate_Commission
| 3qz wrote:
| > Facebook dating team's decision to avoid including a filter
| that would let users target or exclude potential love interests
| of a particular race
|
| Lots of minorities actually love this feature because they can
| find people from their own community. If it were up to teams like
| this, they would remove the gender filter as well. I'm glad these
| crazy Twitter people were fired.
| alephxyz wrote:
| What they should've done is add a second filter to avoid
| getting matched up with people who use the race filter.
| altruios wrote:
| ...Or maybe facebook dating is a little trite and shouldn't
| even be a thing?
|
| You get decisions like this after profiling use cases -
| targeting by race might be a vector for targeting minorities
| for hate crimes.
|
| Community/social sites (in my opinion) should not double as
| dating sites - for safety reasons.
|
| The reason we can't have nice things is because of bad actors:
| all design needs to account for bad actors.
|
| Assholes ruin design: don't shoot the messenger - or stop
| taking their calls when your own team tells you they exist.
| BryLuc wrote:
| worker767424 wrote:
| Has anyone seen teams like this actually work? They seem to
| mostly exist for PR purposes.
| koheripbal wrote:
| PR is the intended purpose.
| worker767424 wrote:
| But do the teams know that? I love picking on Timnit Gebru
| and her time at Google. She clearly didn't know what her
| actual job was, which is funny because she seems to be good
| at marketing herself. You think she'd know Google was using
| her for marketing.
| koheripbal wrote:
| I believe they do. That's why groups like this love the
| word "optics".
| ThinkBeat wrote:
| Are virtue-signaling jobs the first to go in this recession?
| einpoklum wrote:
| > Group was put in place to address potential downsides of the
| company's products
|
| Oh, you mean like how Meta has access to your data, sells it, and
| also sends some/all of it to government intelligence agencies?
|
| Or how they manipulate, censor and promote content, shaping
| public opinion (of Facebook users anyway)?
|
| Or how Facebook is addictive and exacerbates anxiety in many
| people?
|
| Those kinds of downsides? Gee, I wonder what could possibly
| happen to such a group.
| pclmulqdq wrote:
| They don't want anyone quantifying just how irresponsible their
| "innovations" were.
| kujin88 wrote:
| When did ethics and Facebook ever come up in the same line, lol!
| seydor wrote:
| Was it not responsible, or not innovative?
| kache_ wrote:
| Simple enough to trust employees to be conscientious and good
|
| Most engineers I know are responsible and ethical, and have
| strong moral compasses.
| cassac wrote:
| Obviously if Google no longer needs to "not be evil" Meta would
| be required to do "irresponsible innovation" just to stay
| competitive. So the team has to go.
| hsnewman wrote:
| Stop using that propaganda outlet.
| scifibestfi wrote:
| Brilliant comment because this could mean Facebook or Engadget,
| and either way it fits.
| tomcam wrote:
| Very good point. What is your recommended Facebook alternative?
| dangerwill wrote:
| The OP may be referring to linking to the WSJ
| DesiLurker wrote:
| The WSJ may very well be a propaganda platform, but I think
| of it as a dog-whistle channel from the true owners of our
| society (the .01%) to the wannabes (the 1%). And it's good
| to see what's being signaled there.
| supahfly_remix wrote:
| How would the WSJ journalists (who are probably in the
| 20%) know what to write? Is the dog whistle message
| conveyed from the owners to the editors to the
| journalists?
| DesiLurker wrote:
| Most likely that: the message would be encoded in the
| paper's policy and the knowledge of what type of stories
| would be acceptable for publication. As somebody once
| said, most journalistic censorship is self-censorship.
|
| The fact that the journalist himself is in the 20% is
| irrelevant, as it's his bosses who finally decide what
| gets to your eyeballs and mine.
| DesiLurker wrote:
| I like ello (ello.co). I also prefer to use Telegram & don't
| have FB installed on any of my devices.
| hedora wrote:
| Nginx
|
| Edit: If I'm honest: S3.
| mattwest wrote:
| Responsible innovation implies creating value. Why bother if you
| make more money by extracting value from your own users?
| numair wrote:
| I think these sorts of "Responsible Innovation Teams" and whatever
| else they are called are double-evil for various reasons:
|
| 1) They are literally created by these companies as a means for
| deflecting attention from the real problems that are occurring at
| a higher / hidden level. For example, these people have no idea
| that Facebook has actively lobbied to prevent countries from
| enacting a digital tax that would more fairly spread the tax
| revenue associated with profits generated in countries around the
| world. There's nothing these people would ever do about that in
| their "work."
|
| 2) The people who take these jobs are generally quite terrible.
| They are willing to accept a paycheck from a company whose ethics
| and tactics they supposedly find reprehensible. If we take their
| jobs at face value, they are paid to show up every day and hate
| what they see and criticize it. It takes a certain type of toxic
| mental state to find that appealing, regardless of how much money
| you're making for doing it. And, if you're motivated by money to
| take such a job, well... What does that say about you?
|
| 3) These teams spread the false belief that these companies are
| so big and powerful that you have to try to "fix it from within,"
| rather than doing what we have done _so_ well for so long in the
| tech industry: we 've burned these empires to the ground and
| started all over, with something better/faster/[eventually]
| bigger.
|
| 4) People within the organization who know that these teams exist
| for regulatory / PR purposes think that they've been given cover
| to continue to act in shady and unethical ways. These sorts of
| groups perversely make it easier for the bad actors within these
| organizations to keep behaving badly, and sometimes even act
| worse. Facebook and its current/former executives are the gold
| standard for this: the more you see "regrets," "internal
| dissent," etc, the more utterly depraved and shameless the
| behavior behind the scenes is.
|
| In summary: Bad people working for bad companies that do bad
| things. Pointless jobs at best, a net negative at worst. These
| are toxic jobs that attract toxic people.
|
| Thankfully, none of this really matters, because Meta/Facebook is
| slowly disintegrating and there's nothing Zuckerberg or anyone
| else can do about it.
|
| (And before someone posts the inevitable "2 billion active users"
| response, do remember that these are network effect businesses in
| which the network effects need to be constantly replenished with
| new and exciting reasons to stay connected. If people are more
| excited to be connected somewhere else, you're dying. And, the
| sort of people who invent exciting new reasons to be connected
| don't work at evil corporations that silo "responsible
| innovation" off from the "real work.")
| lacker wrote:
| Facebook used to be organized at a high level into different
| groups named after different company goals. Like Engagement,
| Growth, Utility. Facebook should be engaging, Facebook should
| grow, Facebook should be useful to people. Eventually they got
| rid of the "Utility" group while keeping "Engagement" and
| "Growth", leaving a bunch of us feeling like... I guess Facebook
| gave up on being useful?
| mupuff1234 wrote:
| The irresponsible innovation team is still hiring!
| r4vik wrote:
| which team would you rather work on anyway?
| m0llusk wrote:
| React is doing fairly well. The new hooks stuff is very
| flexible.
| fknorangesite wrote:
| That's the rest of the teams.
| stephc_int13 wrote:
| Meta is on the same path that Nokia and Kodak once followed.
|
| They will survive, but they will bleed and shrink a lot, and
| their relevance will likely be insignificant ten years from now.
|
| The Metaverse bet is dumb; they simply can't execute, and even if
| they could, this whole Metaverse thing is likely to stay in the
| videogame realm for a very long time.
| vhold wrote:
| I think VR needs pretty badly to pivot in general, it's an
| amazing technology that tries to replace phones/computer
| interfaces where it should be working with them.
|
| I don't want to strap a thing to my face and "be in VR" for a
| long period of time, navigating menus by pressing giant virtual
| buttons or pointing lasers at them. I want to, while using
| google maps on a phone or computer, be able to hold the headset
| up to my face and look around briefly and then take it back
| off. Same with data visualizations, 3D models, short-form
| entertainment, etc.
|
| Standing in VR looking at loading screens is an _awful_
| experience. It should already be loaded to what I want to do.
|
| It's hard to imagine Meta making that pivot since they seem to
| want to have a walled garden like Apple's. But they could at
| the very least make navigating/loading by a phone app an
| option. They could even track the phone in VR and have a
| mirrored twin of it (with a kind of blown-up holographic
| version you can more easily read) that you can use both in
| and outside of VR, instead of their controllers.
| codalan wrote:
| I don't think it's totally dumb.
|
| There's a lot of IP and patent rights to be gained. Even if
| Metabook shrinks back, they will always have a possible revenue
| stream in enforcing those rights and collecting royalties on
| newcomers to the VR market.
| whimsicalism wrote:
| The Metaverse bet is dumb, but I wonder about the ten years...
| Both WhatsApp and IG seem relatively sticky.
| kemiller wrote:
| I had a front row seat to Kodak's demise, and I can tell you
| that the depth of their denial internally was far, far greater
| than Meta's today. Not to say that Meta isn't in trouble, but I
| wouldn't count them out just yet. They are at least trying to
| keep up. Kodak only embraced the digital world grudgingly, and
| they sabotaged all their most promising initiatives because
| they threatened the (incredibly) lucrative film gravy train
| that they couldn't bring themselves to accept was ending. I'm
| sure all the decision makers who presided over that downfall
| took comfortable early retirement offers and didn't suffer for
| their mistakes.
| realusername wrote:
| > The Metaverse bet is dumb
|
| I see it as an attempt to keep propping up the stock value; I
| don't even think they believe it themselves.
| akomtu wrote:
| FB won't be the one to make it happen, but I can see how a
| "minecraft with stable diffusion" will be all the rage in ten
| years.
| strix_varius wrote:
| They definitely believe in it.
|
| Talk to anyone you know who works at Meta - it's obviously
| the top thing on Zuck's mind (and that's been clear every
| Thursday for years now). FRL is getting huge funding in a way
| it wouldn't if this were just to prop up the value.
|
| (personally, I admire that - FB has pushed VR technology
| forward a couple of generations, singlehandedly)
| fuckHNtho wrote:
| yes, this is the obvious explanation and i dont get why
| public discussion can't make any progress but insists on
| throwing out the same "i dont think my mom wants to live
| inside minecraft yet" weak take every chance it gets.
|
| edit: it coincided with fb reporting saturated user numbers
| stephc_int13 wrote:
| Well, if this is a PR op, it is the dumbest and most cost-
| inefficient one you can imagine. We're talking 10B a year.
|
| 10 _billion_ a year.
| realusername wrote:
| I had no idea it was this costly, that's crazy... Yeah,
| maybe I'm wrong and they really are seriously betting on
| this weird project.
| marcosdumay wrote:
| How much of that is just relabeled activities that they
| would do anyway? Relabeling things is almost free, and
| leads to very high numbers.
| tmpz22 wrote:
| I'm just dumbfounded that they would take a burgeoning
| technology like VR and set the impossible goal that it would
| one day supplant the absurd profits of their ad business.
|
| That's like trying to tag your kid as a Heisman trophy winner
| as the kid is being birthed. Sure, be a proud parent, but
| understand you are delusional.
|
| Facebook is a company with nearly two decades of big tech
| experience, 40,000+ employees, and unfathomable amounts of
| capital/IP/assets. To see them make the same logical leap as
| a two-person startup of fresh-out-of-college optimists is
| baffling.
| duskwuff wrote:
| Someone I know recently got contacted by a recruiter for
| Meta's VR team, with the pitch that the team "wants to get
| 1Bn [users] by end of year!".
|
| That'd amount to converting 30% of all Facebook users to VR
| users, worldwide, in four months. Including a bunch of users
| internationally who don't even own computers, and certainly
| can't afford VR hardware. "Delusional" is absolutely the word
| for it.
| [deleted]
| JumpCrisscross wrote:
| > _set the impossible goal that it would one day supplant the
| absurd profits of their ad business_
|
| Who says they're replacing ads with VR?
| jayd16 wrote:
| Supposedly Apple is dropping a headset in the next year.
| Should be interesting to see what people think of the
| industry after that.
| stephc_int13 wrote:
| What are your expectations about Apple in this regard?
|
| I am curious, but I have low expectations.
| jayd16 wrote:
| Honestly, no idea. I'm just interested to see the shoe
| drop.
| tmpz22 wrote:
| I think VR is as much about GAMES as it is hardware. I'm not
| sure anyone, even Apple, can nail the hardware, though they
| might. But I know FOR SURE Apple won't nail the games.
| Look at Amazon's attempts (lol).
| jayd16 wrote:
| Yeah that's my assumption as well but who knows. They're
| getting much better at content.
| smaryjerry wrote:
| Meta has already screwed up Facebook itself from a (2D) UI
| perspective. It feels clunky and ugly in my opinion, and was
| better designed when it originally came out. Now they seem to
| be on the same path with their Metaverse apps: low-quality,
| Wii-like graphics that feel 20 years old already.
| jayd16 wrote:
| Why do you think they can't execute? What is execution in your
| mind and why is it out of reach?
| stephc_int13 wrote:
| First, company culture: on the technical side, a team full of
| fat, overpaid cats surrounded by the worst kind of
| bureaucracy. Most of them spent most of their careers doing
| web stuff, and what they achieved so far is underwhelming, to
| say the least.
|
| Second, Meta is a huge corporation; the money makers won't
| let the kids play for too long. No matter how much
| theoretical power Zuck has in his hands, they won't let him
| burn all the cash and bleed the cash cow to death.
|
| Third, we have enough history to see a pattern: Nokia and
| Kodak saw the iceberg, and it didn't matter.
|
| And small things like Carmack leaving and Zuck being
| delusional are clear hints of what is going on internally.
| spywaregorilla wrote:
| Executing, to me, would mean providing the user experience
| they're showing in their own videos, much of which is simply
| not possible with headset technology.
|
| https://www.youtube.com/watch?v=SAL2JZxpoGY
|
| Just a ton of little things. A woman floating horizontally in
| space: possible to render, not possible to convey to the
| player. Cards: basically impossible to actually get cards
| with good physics in a networked game like that. The
| smooth floaty motion and spinning is particularly dumb.
| derefr wrote:
| > and even if they could this whole Metaverse thing is likely
| to stay in the videogame's realm for a very long time
|
| There are already concerts with paid tickets going on in
| platforms like Fortnite, that _just happen to be able to_ act
| as multi-user VR environments, even though that wasn't their
| designed purpose. The demand is real.
|
| I don't see it as much of a bet that "people are already
| creating these events, and want to build professional services
| around doing so; these people would appreciate a platform to
| run such events on that won't be shut down/go unmaintained in
| two years just because the game that the platform was built for
| ceases to be relevant; and people who pay to attend these
| events are willing to download a client to consume the content
| they paid for." That just seems to be a series of common-sense
| facts to me.
|
| Whether Facebook can actually end up as the _platform of
| choice_ for hosting said events is entirely non-obvious. But,
| if nothing else, they do have connections, channel partners,
| global scale to deploy reliable infrastructure that can be
| trusted to not buckle under event load, etc. It's up in the
| air how much things like "the aesthetics of the experience"
| really matter, compared to those. How much does a band care
| about the aesthetics of the venue?
| paganel wrote:
| You're confirming OP's point: this is a video-game thingie. I
| personally don't know anyone who plays or cares about
| Fortnite, I'm a man in my early 40s with little to no friends
| who work in the IT industry.
| hedora wrote:
| Second Life has been doing this for over a decade. I see no
| reason to expect Meta to do a better job, and see every
| reason for people to assume they've done a maliciously-bad
| job, given their reputation.
| hbn wrote:
| It'll be worse than Second Life because they'll censor
| anything that isn't brand-friendly to Facebook (and there
| is certainly a lot of stuff like that on SL - it's a lot of
| the reason people play it!)
| stephc_int13 wrote:
| Would you trust Meta with a new social platform? Do you think
| your friends will?
|
| Even if they can execute (I believe they can't), and even if
| the Metaverse is the next huge thing that some are predicting
| (I think their vision is too dependent on VR/AR magically
| becoming practical), they still have a huge reputation issue:
| they are already bleeding users on FB and Insta.
|
| They need a miracle.
| BbzzbB wrote:
| Have privacy geeks who keep saying "no one would trust FB
| with X" ever looked at the numbers? I keep seeing this
| sentiment around Meta and VR, yet guess which VR company
| has like >70% market share.
|
| https://www.statista.com/statistics/1222146/xr-headset-
| shipm...
| stephc_int13 wrote:
| I'd like to see numbers.
|
| I mean, engagement, not market share, total time spent by
| users with the headset on.
|
| My intuition is that 90% of that "market share" is
| collecting dust.
| BbzzbB wrote:
| Sounds like you just believe whatever you want then, no?
| How is 80% of sales in Q4 not a pretty good indicator
| that people do, in fact, "trust Meta with a new X" (or at
| least don't care enough)?
|
| As for that new intuition, what good reason is there to
| believe Oculus headsets are so vastly underused after-
| purchase compared to competitors? And what has that to do
| with Meta's reputational issues?
| MichaelCollins wrote:
| > _How is 80% of sales in Q4 not a pretty good indicator
| that people do_
|
| Because everybody I know who's bought a VR headset (any
| brand) lets it collect dust after a month, and I don't
| think that's a fluke. _Buying_ a VR headset is not
| _using_ a VR headset.
| BbzzbB wrote:
| My point was with regards to the common insinuation that
| people (i.e. consumers) will not be using Facebook
| products because of the company's reputation issues,
| which I think has never shown up in the underlying
| numbers. Through all the scandals whether it's political
| weaponization by users and advertisers, privacy issues or
| else, their userbases never stopped growing ever larger
| whether it's FB+Messenger, Instagram, WhatsApp and now
| VR. There was a single (recovered) QoQ decline in global
| active users in its history, seemingly unrelated to any
| scandal.
|
| The company sure has reputational issues, but as much as
| people hate it (I have my own deep issues with it), it's
| not stopping the larger masses of people from using their
| products. In fact, >85% of humans (ex-China) with an
| Internet connection use their apps, as do >70% of VR headset
| purchasers.
|
| Whether VR is a fad, whether people who buy headsets find
| long-lived utility in them, or whether the push is a good/bad
| bet for $META is unrelated to what I meant to say, which is
| that their reputation is not stopping them from currently
| dominating the VR space (non-premium only so far, but maybe
| with Cambria). It doesn't seem reasonable to think the 7-8
| figures of current VR enthusiasts are less morally principled
| than the 9-10 figure masses would be.
| simiones wrote:
| > There are already concerts with paid tickets going on in
| platforms like Fortnite, that just happen to be able to act
| as multi-user VR environments, even though that wasn't their
| designed purpose. The demand is real.
|
| You have that the other way around. Fortnite was able to host
| a concert because people like to play Fortnite. There is no
| demand for VR concerts, there is perhaps some demand for
| concerts in already-populated online spaces.
|
| People aren't flocking to Fortnite because of the concerts.
| People who play Fortnite are flocking to these concerts when
| they happen in their game.
| strix_varius wrote:
| > There is no demand for VR concerts, there is perhaps some
| demand for concerts in already-populated online spaces.
|
| Megan Thee Stallion, Billie Eilish, and the Foo Fighters
| are all releasing VR concerts this year.
|
| My wife, who has never played Fortnite, attended several VR
| concerts over the past few years.
|
| Given how popular listening to music is on services like
| Youtube, I believe it is short-sighted to fail to see the
| appeal of an immersive 3D-audio VR experience for music
| fans.
| JeremyNT wrote:
| Foo Fighters also released 29 tracks for Rock Band, which
| was really popular for a couple of years.
|
| New toys are fun to play with, for a bit. That doesn't
| necessarily mean they're going to stick around or have
| much long term relevance.
| pessimizer wrote:
| > There are already concerts with paid tickets going on in
| platforms like Fortnite, that just happen to be able to act
| as multi-user VR environments, even though that wasn't their
| designed purpose. The demand is real.
|
| There's no demand to watch computer animated concerts through
| heavy glasses in Fortnite coming from people who don't
| already spend a bunch of time playing Fortnite. This is an
| add-on to Fortnite, not anything that anybody really wants to
| do for its own sake.
|
| What's the point of watching a computer animated "concert" in
| VR anyway? Who would get any joy out of that?
| Karunamon wrote:
| Ask the 12.3 million who watched Travis Scott have a
| concert in Fortnite. [1]
|
| There is no way to convey this without being a little bit
| of a jerk, but I think your take demonstrates being out of
| touch, in the "no wireless, less space than a nomad, lame,
| also Dropbox is just some tooling around rsync" sense that
| HN tends to get sometimes. This is enough of a phenomenon
| that other mainstream artists are starting to get into it,
| which means there are enough people getting joy out of it
| for it to have a market of its own.
|
| I think you ignore the metaverse (the concept, not
| Facebook's janky implementation) at your own peril. The
| next generation is growing up with virtual concerts, and in
| many cases virtual artists.
|
| [1] https://variety.com/2020/digital/news/travis-scott-
| fortnite-...
| forchune3 wrote:
| ok so a bunch of ppl were playing Fortnite in april 2020.
| that doesn't mean that people want to watch vr concerts
| breaker-kind wrote:
| i urge you to consider any worldwide trends that were
| occurring in April 2020 that might have made people
| inclined to attend virtual events
| whydoineedone wrote:
| nfw2 wrote:
| A lot of technology has started with gaming driving much of the
| early development, including PCs, graphics, AI, and the
| internet.
|
| Whether Meta can execute on the metaverse bet is a fair question,
| but I expect most of the changes they are envisioning will come
| one way or another.
| root_axis wrote:
| > _The Metaverse bet is dumb_
|
| It's a moon shot, but I wouldn't call it dumb; it's actually a
| pretty obvious business decision. Keep in mind, Oculus (now
| being rebranded as Meta) is currently the preeminent VR
| platform and has sold more devices than every other competitor
| combined. This is a potentially huge opportunity for them to
| expand their social media reach into a completely pristine
| market _that they already dominate_. Meta has a unique
| opportunity to define the future of VR; it'd be dumb _not_ to
| try something like this.
|
| Of course, none of this addresses the abysmal branding or
| Meta's horrible reputation or the public's perception of Zuck
| as the herald of a technocratic dystopia. I also agree that VR
| will indefinitely remain within the realm of videogames, and
| gaming is not in Meta's DNA, which I think will probably be
| their biggest barrier to success. Still... they own the VR
| market, they also have plenty of cash on hand, and that goes a
| very long way when trying to make the impossible possible, e.g.
| the Xbox...
| codalan wrote:
| At the end of the day, they will have IP rights for a lot of
| VR related technology, and that will be worth something.
| stephc_int13 wrote:
| Microsoft had DirectX and a decent kernel before building the
| Xbox. The hardware part was much easier, and also an easy
| target (do better than Sony) and they had a slow start.
|
| Also, MS had quite a bit of experience in
| building/maintaining a platform for developers.
|
| Frankly, we're not talking about the same kind of leap.
| lazyfanatic wrote:
| I feel like that team spent a lot of time at the water cooler.
| ParksNet wrote:
| faitswulff wrote:
| It's like a team in a wildfire tasked with discovering
| potential harms to the forest.
| scifibestfi wrote:
| lokar wrote:
| Zero. They don't use Slack.
| carlivar wrote:
| What do they use?
| aeyes wrote:
| Workplace (Facebook for companies)
| ptudan wrote:
| Messenger lol
| whimsicalism wrote:
| f
| mardifoufs wrote:
| For team communication and org-wide messaging? How?
| That's insane if true
| scifibestfi wrote:
| Touche
| ulkram wrote:
| Get rid of all the Ethical AI folks too. In theory, it might
| be useful, but in practice, it's just bureaucracy.
| bongoman37 wrote:
| zxienin wrote:
| This vaguely reminds me of the Silicon Valley season 3
| episode where the Hooli CEO throws the entire Nucleus team
| under the bus to "take responsibility".
|
| Cracks me up just how close SV (the series) is to what's out
| there...
| Victerius wrote:
| Devil's advocate: Private firms shouldn't be in the business of
| evaluating the harm they cause to society. We the people, and the
| enforcement arm of the people, the government, should pass
| legislation and enforce laws to prevent such harms. If the
| people, i.e. the government, are unwilling to perform this basic
| function, private firms shouldn't feel obligated to take the
| mantle of responsibility.
| bhhaskin wrote:
| I would say there should be checks and balances. Of course
| companies should be looking at what is harmful or not, but they
| shouldn't be the only ones. It's all about checks and balances.
| lisper wrote:
| That sounds great in theory but there is a major problem: in
| the U.S. at least, corporations are political actors. So you
| can't fob off responsibility from corporations onto "the
| people". In the U.S., corporations literally _are_ the people.
| PuppyTailWags wrote:
| > Private firms shouldn't be in the business of evaluating the
| harm they cause to society
|
| Private firms have been evaluating the harm they cause since
| tobacco. They need to know precisely what they need to cover up
| under the guise of discovering societal dangers for social
| good.
| pessimizer wrote:
| Nah. Tobacco companies didn't give a shit about societal
| dangers. They were trying to deflect liability, which is a
| business decision. It's a business decision that they're
| forced to make because we have _laws_ about _liability_ and
| don't rely on the generosity of CEOs.
| PuppyTailWags wrote:
| I literally said that private companies have been using
| studies to deflect liability _under the guise of social
| good_.
| [deleted]
| tasty_freeze wrote:
| Imagine there is a manufacturing plant and they have a waste
| stream of highly toxic chemicals. The cheapest solution is to
| pump it into the ground, and if in a decade or two later, those
| chemicals make their way into the water table it is someone
| else's problem. For now, management has succeeded in producing
| higher profits. That scenario has happened at hundreds of sites
| around the US, including in Silicon Valley, and billions have
| been spent by the government to clean it up (imperfectly).
|
| Under your proposal, we the people and the laws would need to
| anticipate any harm a company might do; otherwise they are off
| the hook. Sure, there are laws now against semiconductor
| companies pushing their chemicals into the ground water, but
| what future chemical might be in use that isn't named in some
| law currently?
|
| A second point is that of regulatory capture. If a company can
| spend $100M lobbying politicians, courting regulators, spending
| on PR to sway public opinion to believe false things, and as a
| result earn $1B in extra profits, that would be AOK under your
| plan.
|
| Every time someone trots out the false claim that it is a CEO's
| legal responsibility to maximize profits, I have to wonder
| about their ethical framework, as if maximizing profits is an
| inherent good.
| pessimizer wrote:
| > Every time someone trots out the false claim that it is a
| CEO's legal responsibility to maximize profits, I have to
| wonder about their ethical framework, as if maximizing
| profits is an inherent good.
|
| The problem is that you're talking about how good people are,
| when other people are talking about how well things work, and
| about figuring out how to make them work better. Your ethical
| framework seems to be that people should be good and do good
| things, but with no reference to what things are good and
| what things aren't other than the intuition and improvisation
| of good people, and no reference to what we do if they don't
| do what we think are good things.
|
| If you're relying on the CEO to be a good person, you've
| dismissed governance as a possibility and anointed kings. In
| general, I don't think that kings work in my best interest,
| so I prefer regulations.
| tasty_freeze wrote:
| I'm not saying that we should only rely on CEOs to do the
| right thing. We absolutely still need regulation, and
| mechanisms for fighting regulatory capture. But the
| original claim was that companies have no responsibility
| towards ethical behavior; I'm saying we need to strengthen
| the moral responsibility of CEOs, not weaken it. Sure, it
| isn't perfect, but it's better than "CEOs are right to do
| whatever they can get away with."
| tedivm wrote:
| Private firms and businesses are run by people. What you're
| essentially saying is that people should not care about how
| their actions harm society as long as it makes them a profit.
|
| "If society didn't want me to kill all of those people why
| didn't they stop me?" is not really a good look. Abdicating all
| of your personal responsibility to "society" is just an excuse
| to be a sociopath.
| whakim wrote:
| If private firms are only beholden to the profits of their
| shareholders (i.e., the way modern capitalism is mostly
| structured), I agree with you and I think that's what we're
| seeing here: Facebook dissolving this team because they believe
| the costs (to them) outweigh the benefits (to them). That being
| said, I think this entire structure is arguable: why are
| private firms only beholden to shareholders when they're built
| on the back of public resources: infrastructure, education,
| talent, laws, knowledge, etc.?
| AlbertCory wrote:
| You are right, but for a different reason than you think:
|
| Whenever there's a lawsuit against Meta, the plaintiff can do
| discovery and demand all this stuff. So the only defense is not
| to have it.
|
| "Meta top management was made aware of all these harms, but did
| nothing about them." -- that's not what their lawyers want to
| hear.
| foobarian wrote:
| I wonder if that team got created for PR purposes during
| those privacy debacles a bunch of years ago.
|
| Bonus benefit from disbanding the team now: if more shit
| happens, can re-create the team as a show of action without
| actually changing anything.
| AlbertCory wrote:
| I like the way you think. You'll go far in top management.
| usrusr wrote:
| The first part is certainly true, but private firms did not
| "feel obligated to take" that mantle: they saw a chance to
| grab it and run.
|
| Meta dissolving that team can mean two things, either that they
| lost hope to get away with it, or that they are confident that
| they will get away with anything, without even pretending.
| forgetfreeman wrote:
| I feel like this stance ignores regulatory capture as a
| concept. In any activity at the scale of individual humans it
| is generally considered inappropriate to adopt an attitude of
| "I'ma do tf I feel until someone tells me I'm fucking up". Why
| should activities at the scale of private firms differ?
| jhgb wrote:
| Wait, so the response to regulatory capture by companies
| should be...regulation by companies? Like, just removing the
| middleman?
| forgetfreeman wrote:
| Never said that, but since we've apparently landed at
| "people are complete sociopaths and absolutely _must_ be
| forced to act in ways that aren't overtly harmful" what,
| specifically, are your recommendations? Because from where
| I'm sitting that sounds an awful lot like ironclad
| reasoning to eliminate profit motive from society.
| jhgb wrote:
| I'm not recommending anything. I'm just saying that "we
| shouldn't rely on companies for regulation of their
| behavior" is not an opinion easily countered by "but we
| need to, just in case of regulatory capture", because
| it's putting the cart before the horse.
| culi wrote:
| Corporations have outsized influence on our government. While
| the working class is too busy working, these corporations are
| able to hire full-time lobbyists whose entire full-time job is
| to advocate for them.
|
| Though we definitely still have miles to go to limit the
| influence of corporations on gov't (and even public opinion by
| limiting media centralization and funding independent media), I
| doubt we'll ever be able to fully limit their influence.
| rewgs wrote:
| This is woefully unrealistic.
| hackerlight wrote:
| > If the people, i.e. the government
|
| Well that's one issue. The government only weakly equals the
| people, because of the electoral college disenfranchising
| voters in NY and CA, lack of ranked choice voting, Black voter
| disenfranchisement, lack of statehood for DC and Puerto Rico,
| and the corporatocracy system of influence.
|
| > If the people are unwilling to perform this basic function,
| private firms shouldn't feel obligated to take the mantle of
| responsibility.
|
| Sounds made up.
| pessimizer wrote:
| > Sounds made up.
|
| What sounds made up?
|
| Also, related to nothing, why have a bunch of people on the
| internet started replying to everything over the past few
| weeks with "sounds made up"?
| pastacacioepepe wrote:
| > the government, is unwilling to perform this basic function
|
| Which usually happens due to lobbying by said private firms.
| nathanyz wrote:
| I agree that business is like a game, and it is up to the
| government to set the rules of that game.
|
| If profit is how you win, then any large enough business will
| do anything within the bounds of those rules in order to
| increase profit. You wouldn't ask a basketball team to stop
| taking advantage of whatever edge they can to win, would you?
| progman32 wrote:
| The issue here is the players are making the rules.
| triceratops wrote:
| In this game the players influence referee appointments.
| O__________O wrote:
| It is impossible to have an internal oversight team that's
| without a conflict of interest.
|
| From law enforcement to newspapers to government to technology --
| unless there's demand from the public and oversight is
| independently funded and managed, without fail, the function
| of the team will end up either functionally meaningless or
| aligned to the parent organization's core objectives.
|
| Beyond that, no venture is without flaws, and until people
| are able to acknowledge that optimal solutions do not mean
| zero negative impact, it will be a race to the bottom, and
| another culture that's able to manage the complexity, whether
| by luck or skill, will eventually replace those who are
| unable to do so.
| advisedwang wrote:
| There's a big difference between a conflict of interest with
| the company at large and a conflict between teams. For example
| at Google, SRE vs SWE is set up as a conflict which mirrors
| reliability vs dev-velocity trade-offs. And that works OK
| because both are valued by the company, so by being the
| decision makers on that conflict, leaders are given a way to
| adjust that trade-off.
|
| In an ideal world the "responsible innovation" team would
| represent one side of a "speed and scope" vs "PR, compliance
| and political considerations" trade-off. So even though it
| goes against some of Meta's goals, it would be valued for
| achieving others.
|
| However, sadly, in practice any time a set-up like this has
| "money" on one side of the balance, that side is always going
| to win. So the team was set up with an impossible task.
| strix_varius wrote:
| Teams like this attract a certain type of person, and it's not a
| builder. In order to justify your own existence, you _have_ to
| invent blockers to place in front of people who are actually
| trying to build things.
| sangnoir wrote:
| > Teams like this attract a certain type of person, and it's
| not a builder.
|
| Ditto for financial audit teams and so-called "IT security":
| all they do is block, I've never had anyone in either function
| build something or help me work faster, just additional
| processes and bureaucracy that slow down _real work_.
|
| edit: I thought my sarcasm would be apparent, but Poe's law
| strikes again.
| strix_varius wrote:
| I haven't worked much with financial teams, but the security
| teams I've worked with have absolutely helped build our
| products. They put practices & systems in place that made a
| golden path to getting software green-lit for production.
| commandlinefan wrote:
| > financial audit teams and so-called "IT security"
|
| They do fill a role that actually needs to be filled, though.
| browningstreet wrote:
| I'm in a highly regulated industry segment and work in an
| information security related function. You've made my day.
| sangnoir wrote:
| You're welcome!
| Karrot_Kream wrote:
| It's because security, audit, and ethics teams aren't judged
| by how much they enable others, they're judged by how many
| threats they block. I work at a well-known tech company but
| joined when they were still very small. Our original security
| "team", a team of 2 engineers, were highly plugged into the
| product and the concerns of the then-small engineering team.
| This team was always willing to help and enable builders to
| stay security conscious.
|
| As we grew into a large company and our security team became
| an entire organization at the company, the org lost all
| connection with the product and became a more classic
| blocker-based team. The org lost empathy with the product and
| was judged on no product metrics, so naturally the culture in
| the org began to just be saying "no" to engineers all the
| time. A few of the earlier hires still try to help, but for
| the most part they just block.
| eropple wrote:
| "Fast" is not the only descriptor that should be applied to
| work where a financial audit might be involved. "Correct"
| probably has a few things to say. "Legal," too.
| sofixa wrote:
| It's not their job to build, it's their job to ensure that
| what _you_ build isn't crap on any number of fronts
| (security, compliance with regulations). You know, important
| things that impact your users (if you have the best software
| ever, but it leaks user PII in the HTML because you're a
| dickhead who wanted to build fast without any regard for
| security, that's not great).
| MattGaiser wrote:
| Sure, if they helped you figure out a way to build, that
| would be fine. Far more often, I am just told to kill
| feature/bugfix/concept. At a past job, the security people
| were against the very idea of self-service password reset.
| Zircom wrote:
| The security team at my current company also refuses to
| allow self-service password resets. The only way is to call
| our service desk and give them your DoB and the last 4 of
| your SSN... and they completely ignored me when I pointed out
| that for at least half the population of the US that
| information is more or less public due to several data
| leaks.
| aaronbrethorst wrote:
| Same with QA. Fire those guys!
| strix_varius wrote:
| This, but non-ironically.
| fruit2020 wrote:
| I used to feel the same when I worked for a bank. And then I
| worked at a place where security was so unregulated that TBs
| of data and millions of user's accounts could be stolen just
| with a shared password and no one would notice
| jollyllama wrote:
| Yes. Does the GP think the same thing about security
| engineering teams? What about QA engineers?
| thatoneguytoo wrote:
| Exactly. This is the right explanation.
| duxup wrote:
| At a previous job I joked that groups like that would be a
| place to attract / corral employees that should be cut during
| the next round of layoffs.
|
| In theory these teams could do some good things; in reality it
| attracts or creates horrible people with INCREDIBLE efficiency
| / creates a cycle of endless meetings / recommendations... and
| it is endless because it is in their best interest to have an
| endless amount of "work" and inject themselves in a way that
| costs them nothing, and everyone else a great deal.
|
| I figured these teams were the easiest place to find people
| who provide nothing at all / get in the way of folks doing
| things, and almost always they don't accomplish their goals
| anyhow. They wouldn't even know if they did accomplish their
| goals, as these groups tend to center everything on their own
| actions / the goal is them doing things endlessly.
|
| Somehow word got back to them about the joke, they blamed the
| wrong person for the joke, tried to raise a hubbub with HR (to
| their credit HR told them to pound sand). And then they got all
| laid off ...
| gnramires wrote:
| But you need some sort of oversight, right? (especially for
| something as critical as social networks used by billions)
|
| I'm a believer in some sort of cross-validation and
| quantification. The ethical impact of a technology should be
| quantified, and addressed if significantly negative (or
| promoted/incentivized if positive!) -- you can quantify the
| number of users impacted, give various arguments, soft
| measures and estimates. Then, you can have other teams
| validate those estimates. After all is said and done, you can
| again evaluate an intervention (on both ethical terms, and on
| the productivity and profitability of your company), and see
| if it was good or not. If a team is repeatedly giving bad
| advice, I think you can then address that (retraining, lay
| off, etc.?). Ethics is not impossible to do :)
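|
| As a minimal sketch of that loop (every team name, score, and
| threshold below is hypothetical, just to make the idea
| concrete):
|
|     # Each team estimates the impact of its interventions; an
|     # independent team re-estimates. Repeated large
|     # disagreement flags a team for review (retraining, etc.).
|     def flag_teams(estimates, validations, tolerance=0.5):
|         flagged = []
|         for team, scores in estimates.items():
|             checks = validations[team]
|             # mean relative disagreement between assessments
|             gaps = [abs(a - b) / max(abs(b), 1e-9)
|                     for a, b in zip(scores, checks)]
|             if sum(gaps) / len(gaps) > tolerance:
|                 flagged.append(team)
|         return flagged
|
|     estimates = {"team_a": [0.9, 0.8], "team_b": [0.3, 0.4]}
|     validations = {"team_a": [0.2, 0.1], "team_b": [0.3, 0.5]}
|     print(flag_teams(estimates, validations))  # ['team_a']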
|
| I believe a certain generalization of this is needed for the
| entire society[1], to externally address good/bad
| technologies. A good example I cite all the time is Open
| source contributions. We still haven't found a good mechanism
| to pay OSS, despite the immense value it brings to society
| (openness itself has a value that direct sale couldn't
| capture). I'm a big believer in distributed evaluation and
| cross-evaluation: if we had distributed entities to evaluate
| impact of various ventures (for efficiency, properly
| categorized and subdivided into science, technology, social
| costs/benefits, environmental costs/benefits, etc.) we could
| apply the same logic to make society more adaptive,
| distributed, and less focused only on the single goal of
| profit. (I plan to elaborate this further, I think we are
| going to need to "patch" our current system in some sensible
| and careful ways to get a really good 3rd millennium!)
|
| [1] Previously sketched this here:
| https://news.ycombinator.com/item?id=28833230
| duxup wrote:
| I think the problem is that the way these groups work ends
| up being something other than intelligent oversight. It's a
| human problem.
|
| They attract or breed people who see it as a great / easy
| job simply to provide oversight but have no investment /
| zero consequences for what happens. They can give tasks and
| eat up time; it costs them nothing, but costs everyone else
| tons of time.
|
| They are also free of any consequences ... how do you know
| such a group did any good? It's entirely for them to
| define. "These products were ethical, and we made them that
| way." Easy to say.
|
| IMO these kinds of questions are up to the folks in charge
| of the products, rather than a random group deciding "is
| this ethical", since those folks actually have to understand
| and know the products. If the folks making the product can't
| handle it, that's the problem.
| MrBuddyCasino wrote:
| _takes notes_
| worker767424 wrote:
| Probably best not to tell HR that there's a joke going around
| that your team should be laid off. Wouldn't want to give them
| any ideas.
| UncleMeat wrote:
| This is also true for security teams and privacy teams and
| accessibility teams. Yet they are extremely important.
| bentcorner wrote:
| Same with QA. Msft rolled QA and dev into a single role and
| (IMO) it was a detriment to the dev cycle. It's difficult to
| switch between builder and blocker hats.
| tpmx wrote:
| We used to have a large QA silo at a previous job. They did
| exactly that. We broke that silo down by moving the QA
| engineers into the respective product teams - and kept them
| there.
|
| Turns out many kinds of businesses/product teams (including
| Microsoft's Windows team, I'd say, from personal user
| experience) need dedicated QA people because many
| worthwhile tests can't really be automated well enough with
| a reasonable effort, or if they can, it's something that
| often breaks and needs regular maintenance.
|
| I suspect that in the Windows case they simply stopped
| testing those troublesome cases.
| opportune wrote:
| Agreed. QA is really helpful for anything UX/UI-related
| (where the alternative is often experimentation, which
| really is only good for changing already existing things,
| and not necessarily ideal for new things). Or if
| something is so big, complex, and crusty like Windows,
| where it'd be a Sisyphean task to go back and add
| automated testing everywhere it could theoretically be
| useful, it's a lot better as a stopgap than
| idealistically declaring you don't need QA since in
| theory you could automate things (if you were willing to
| spend 2000 person-years on it).
| tpmx wrote:
| In case you're confused about the downvote(s):
|
| https://news.ycombinator.com/item?id=32730380
| cpeterso wrote:
| Does Microsoft no longer have STEs and SDETs?
| ThrowawayB7 wrote:
| IIRC, the STE role was eliminated in 2006 or so.
|
| In 2014, for most divisions, the SDET role was rolled
| into the function of SDEs, reportedly resulting in
| drastic attrition of the converted SDETs. The exception
| was the operating systems division, which was reported to
| have laid off all of its SDETs at that time.
| munificent wrote:
| QA is an interesting one because many companies don't
| separate QA from development and it works pretty well. I
| think the main reason you can pull this off is because you
| can "do QA" by writing automated tests, so it still fits
| into the "builder" style of work that engineers prefer.
|
| If you asked engineers to do a large amount of manual
| testing, they would either (a) tell you they were and then
| secretly automate it, (b) simply not do it, or (c) quit.
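|
| To make the "builder" framing concrete, a toy example of the
| kind of automated check that replaces a manual QA pass (the
| function and test here are made up for illustration):
|
|     # test_normalize.py -- run with pytest; checks behavior a
|     # manual tester would otherwise verify by hand each release
|     def normalize_email(addr: str) -> str:
|         return addr.strip().lower()
|
|     def test_normalize_email():
|         got = normalize_email("  Alice@Example.COM ")
|         assert got == "alice@example.com"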
| Ensorceled wrote:
| I think these are all examples of teams that can have
| concrete goals: accessibility teams can be enforcing a
| standard or regulation (like WCAG), privacy teams can be
| auditing for compliance to GDPR or COPPA, security teams can
| be monitoring for security updates, pen-testing, and auditing
| for compliance to standards like PCI DSS.
|
| "Responsible Innovation" is kind of nebulous and I would
| expect it to be problematic unless given clear rules of
| engagement and supplied with a strong leader.
|
| That said, somebody needs to be responsible for dealing with
| the endless stream of products that are deeply flawed out of
| the box: builtin racism, builtin stalking/harassment tools,
| "0-day" doxing of existing users ...
| UncleMeat wrote:
| Consider privacy. I'd wager that most people on HN would
| prefer it if major social media and advertising companies
| were doing more than waiting for legislation and
| implementing the bare minimum. "Privacy" in the aggregate
| is vague rather than concrete. Similarly, you could imagine
| a team who is responsible for addressing actual policy
| concerns around things like amplifying sectarian violence
| and _also_ having a broader and more vague mission around
| "harm".
| Ensorceled wrote:
| I think most privacy teams are responsible for compliance
| to legislation and, hence, have clearly defined, concrete
| goals. Are you disagreeing?
| detaro wrote:
| One would hope the goal of a privacy team would be to
| work to improve and maintain privacy, even if they are
| not strictly legally compelled to do so in the specific
| case.
| UncleMeat wrote:
| No. I am saying that privacy teams often go beyond that
| and that this "beyond" is in the vague and ill defined
| territory.
| PragmaticPulp wrote:
| Good security and privacy teams do a lot of work to establish
| good practices and build frameworks and platforms for others
| to build upon.
|
| The bad security teams just show up to criticize everything
| and stop other teams without providing workable
| alternatives.
|
| The problem with having a team with a vague charter
| ("responsible innovation") with unclear scope (e.g. not just
| security or just privacy) is that it's really hard for them
| to feel like they're building foundations for others to build
| upon. It's too easy for them to fall back to roles where they
| see themselves as the gatekeepers and therefore backseat
| drivers of other teams. It becomes a back door for people to
| insert themselves into power structures because nobody is
| entirely sure where their charter begins and where it ends.
| UncleMeat wrote:
| Of course. But good security teams also say "no you can't
| do that even though it would be the fastest path to
| market." And looking at a team that says "no" and saying
| "ugh these people aren't builder they are just getting in
| the way" is a great way to end up with a company that is
| irresponsible towards their users.
|
| We can talk about whether this particular team is
| effective. We can talk about unclear scope (though, IMO,
| the scope here is no less clear than "privacy"). But the
| complaint that teams that put barriers in front of other
| teams are always bad is insufficient at best and downright
| dangerous at worst.
| strix_varius wrote:
| > But the complaint that teams that put barriers in front
| of other teams are always bad is insufficient at best and
| downright dangerous at worst.
|
| Nobody made that complaint.
|
| Please don't misrepresent my statement, which was
| specifically, "to justify your own existence, you have to
| put blockers in front of people who are actually trying
| to build things."
|
| Good security teams build & they help others build as
| well. They build secure-by-default platforms, secure
| common libraries, and guidelines for others to follow
| with confidence. Working with a good security team means
| you're confident _your product will get to market faster_
| because, at launch time, you know you'll be green-lit
| since you followed the org's security best practices.
|
| That kind of team doesn't have to invent blockers in
| order to justify its own existence.
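|
| For a concrete (if toy) example of what "secure-by-default"
| can mean in a common library, here is a sketch; the helper
| and its defaults are made up for illustration:
|
|     # A shared helper bakes the security team's cookie rules
|     # into the default path, so product teams get them free.
|     from http.cookies import SimpleCookie
|
|     def make_session_cookie(name: str, value: str) -> str:
|         cookie = SimpleCookie()
|         cookie[name] = value
|         cookie[name]["httponly"] = True   # not readable from JS
|         cookie[name]["secure"] = True     # HTTPS only
|         cookie[name]["samesite"] = "Lax"  # limit cross-site use
|         return cookie[name].OutputString()
|
|     print(make_session_cookie("session", "abc123"))
|     # session=abc123; HttpOnly; SameSite=Lax; Secure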
| neon_electro wrote:
| You should edit your statement to include the word
| "invent" - that goes a long way towards describing the
| blockers you talk about as invalid, made up, not relevant
| to real business constraints, etc.
|
| Definitely helps you make your point.
| joshuamorton wrote:
| So then the followup question is, based on _what_ do you
| make the claim that the team under discussion didn't
| build anything?
|
| You admit that teams in this space can and do often build
| tools to make things secure by default or private by
| default, so why couldn't this team be involved in building
| "responsible" by default?
|
| Your argument here is basically that you believe security
| and privacy "blockers" are inherently of value, so a team
| enforcing them isn't 'inventing' blocks, but other teams
| are.
| hn_throwaway_99 wrote:
| Very much disagree. _Good_ security teams, _good_ privacy
| teams and _good_ accessibility teams should understand that
| their highest priority should always still be to _ship
| product_. Their job should just be to make sure that the
| product that ships is secure, ensures privacy and is
| accessible.
|
| The difference between these things and something like a
| "Responsible Innovation Team" is that the goals of the latter
| are often poorly defined and amorphous. What does
| "Responsible Innovation" really even mean? But contrast that
| with, for example, security goals, which are primarily "don't
| get breached". Security folks, privacy folks and
| accessibility folks should all have a very well-defined job
| to do. For the other "PR-focused" teams, they usually have to
| make work up to justify their jobs.
|
| Note that, obviously, it's also possible to have _bad_
| security people, for example, who only see their job as
| throwing up roadblocks and demanding that your security
| checklist be ever-growing. I'd even say these are the majority of
| "security" people. But the good security people who are able
| to integrate themselves successfully into your product
| development processes are worth their weight in gold.
| unicornhose wrote:
| It seems that you could also see the responsible innovation
| team in the same light -- 'the product that ships should be
| innovating in a responsible way'.
|
| Also IME none of these things are binaries -- a product
| could be more or less secure, privacy-respecting, and
| accessible, just as it could be seeking to innovate more or
| less responsibly. Different aspects of security, privacy,
| accessibility, and responsible innovation will be important
| at different times, depending on what's happening in the
| world.
|
| The jobs are never as well-defined as you'd imagine.
| lefstathiou wrote:
| IMO, security and accessibility are more objective ends than
| "responsible innovation" which can be plagued by bias and
| personal agendas.
| LewisVerstappen wrote:
| marricks wrote:
| Gotta love mocking people who were just fired.
| spamizbad wrote:
| Pretty much any time a major tech company announces layoffs
| the peanut gallery shows up talking about how everyone
| getting cut is lazy/dead weight.
|
| In reality it's far more likely they had the misfortune of
| being assigned to teams that weren't core to the business'
| operation or driving revenue or some Exec/VP lost a
| political battle and their department got the axe.
| hinkley wrote:
| We are very hypocritical that way. You can get upvotes
| for being glad you were laid off, and you can get upvotes for
| saying the people who got laid off were bad people
| anyway.
| drewbeck wrote:
| It's a classic post hoc rationalization that happens when
| folks are fully committed to loving business and bosses
| over actual people's well-being. It's gross.
| wpietri wrote:
| For sure.
|
| To be fair, organizations select for people who love the
| business and its leaders over people who are willing to
| point out when the emperor has no clothes. Sort of the
| same way that in subcultures where abuse is common, the
| people who remain are the ones who will support abusers and
| blame the abused.
|
| So I get how it happens, and why people like to victim-
| blame as a way of making themselves feel better about
| their choices. But it's still thoroughly gross.
| danrocks wrote:
| So when is the appropriate time to mock them? When they are
| employed making high six figures to invent harms and then
| ways to reduce them?
| drewrv wrote:
| Why are you insisting they're inventing harms? Do you
| have evidence to support this? Are you just opposed to
| the idea of ethics?
| pessimizer wrote:
| I have a creepy feeling that this entire thread has been
| infested by Facebook insiders who are working on
| something that is awful, and lobbied and heckled to get
| the people fired who were telling them it was awful. No
| one else would have this sort of venom.
| Morgawr wrote:
| How about we just don't mock people?
| danrocks wrote:
| Mockery is a very powerful tool to speak truth to power.
| SpicyLemonZest wrote:
| I'm not a fan in general, but in this context understanding
| the team's mindset seems relevant to understanding what the
| practical effects of this cut will be. Should we expect
| Facebook to innovate less responsibly in general, or was
| this team working towards alignment with specific cultural
| attitudes? For example, the article describes how the team
| stopped Facebook Dating from adding a race filter and saw
| that as one of their proud accomplishments - this seems
| like more of a values judgment than a question of
| responsibility, since it's not a question that's anywhere
| near settled in broader society.
| throwayyy479087 wrote:
| It's kind of nice to watch people who call you names for a
| large amount of money get fired, yes.
| unethical_ban wrote:
| Attacks on character should have justification in the
| comment, not merely a reference to a Twitter existence.
| fknorangesite wrote:
| > these people are also nutcases.
|
| In what way?
| danrocks wrote:
| "Zvika Krieger is a subversive ritualist and radical
| traditionalist who is passionate about harnessing ancient
| wisdom to create modern meaning, fostering mindfulness and
| authentic connection amidst digital distraction, and
| bridging the sacred and profane. Zvika is the Spiritual
| Leader of Chochmat HaLev, a progressive spiritual community
| in Berkeley, CA for embodied prayer, heart-centered
| connections, and mystical experiences. "
|
| One of them is an apparent cult leader, for example.
| JumpCrisscross wrote:
| > _subversive ritualist and radical traditionalist_
|
| This sounds like the person I'd put in charge of a
| department designed to run in circles.
| InitialLastName wrote:
| If they were a deacon in their church, you wouldn't blink
| at it.
| MichaelCollins wrote:
| Many people in this industry would blink at that. I
| recall a whole lot of people on HN blinking at the
| Christian beliefs (among other things) of the guy at
| Google who said the text generator was alive.
| worker767424 wrote:
| His resume has some impressive accomplishments, but has
| enough BS that it makes me suspicious of what teaching
| courses on ethical design actually entails.
| evr1isnxprt wrote:
| JumpCrisscross wrote:
| Nothing in the comment you're replying to mentioned software
| engineers. They claim people building things are unlikely to
| spend time in "responsible innovation" teams. That's true if
| one's building rockets, medicines or cat GIF sorters.
| evr1isnxprt wrote:
| speakfreely wrote:
| > But building the software is not hard.
|
| This reminds me of 400lb men who can't walk up the stairs
| without getting winded yelling "are you kidding me???" at the
| TV when they see NFL quarterbacks throw a bad pass.
| strix_varius wrote:
| It's weird that you assume "builder" means "programmer." At a
| healthy software company, _most_ roles are builder roles:
|
| - designer
|
| - product manager
|
| - marketer
|
| - sales
|
| - customer success
|
| Each of these roles delivers something additive. They make
| the product easier or nicer to use, make it address the
| market better, or even just make people aware that it exists.
| Marketers build complex promotional structures to gain users.
| Salespeople put together actual accounts paying actual money
| so the product can continue to improve. Customer success
| helps people use the product, and a good CS team filters
| high-signal feedback back to the product & engineering orgs.
|
| People in all of these roles (and more) are building the
| product. If their roles were cut, the product would be worse:
| uglier, slower, less useful, less used, less purchased, and
| less successful.
|
| However, there _are_ roles that attract folks who _don't_
| want to build. It's smart to get rid of them.
| evr1isnxprt wrote:
| You can try to sell me on it emotionally but it isn't going
| to happen.
|
| Economic trade is just something humans _do_. Babbling
| about it in rambling fluffy semantics is not what gives
| rise to economic activity.
|
| A lot of these builders should build themselves into fuller
| better rounded people than office workers who memorized an
| emotional caricature they use to extract attention from
| others. HR, marketing, etc exist to insulate financiers,
| give off an air of building big ideas, but it's layers of
| indirection so the financier can avoid real work.
|
| The majority don't have to import your relative emotional
| context any more than they have to import a preacher's
| religion.
|
| That's what's going down with the economy; the majority are
| tired of the overhyped, coddled office worker. This is just
| the start of the barnacle shedding. Source? Data models
| forecasting economic "plot lines" I collate with others to
| inform investment choices, not a bunch of high emotion
| semantic gibberish intended to sell me on a relative
| perspective.
|
| SEs and otherwise, too many are addicted to an emotional
| confidence game that is sapping agency in other contexts.
| This is low hanging fruit being shed, but ML will come for
| content creators and SEs sooner than most expect; it's all
| electron state in machines. Generating data/code as a job
| will be automated; that is a stated goal of many powers
| that be in tech.
|
| I'm not the only one who sees it; in a 2 year old interview
| Gabe Newell was calling ML content creation an extinction
| level event that's right around the corner.
|
| More and more "builders" of hype and ephemera are going to
| be receiving the wake up call as the months and years
| progress. Building layers of indirection as a career is not
| long for this world.
|
| But more simply: "software builders" is not language I see
| as constrained to SE. Like you said it takes a village to
| build a software business. I think you inferred a bit much.
| marricks wrote:
| Yep, blockers like:
|
| - should we do this?
|
| - who do we hurt by doing this?
|
| - oh god people are hurting why are we still doing this?
| jayd16 wrote:
| Already a pretty bad list. It should at least be "how can we
| do the thing we want to do in a responsible way."
|
| You can guarantee no harm by doing nothing but that's not a
| good enough answer.
| pessimizer wrote:
| If I consult with you on how to kill my wife in a
| responsible way, I hope you'll tell me that there's no way
| for me to kill my wife in a responsible way.
| mint2 wrote:
| Here's a good example where engineers needed a team like
| this:
|
| - should we name this device we want to put in every house
| after one of the most common American female names?!
|
| engineers and CEO: I see no issue with that!
|
| Several million people named Alexa now have everyone from
| toddlers to their friends yelling their name, ordering them
| to do stupid tasks and saying "Alexa stop" repeatedly.
|
| The name cratered in popularity for good reason.
|
| Yet Amazon still has not renamed their dumb speaker.
| mateo411 wrote:
| I'm pretty sure that a Product Manager made the final call
| on the name of the device. Some DS nerds might have given a
| list of names that could be used and presented some stats
| on the accuracy of the device recognizing the name, but the
| PM probably made the final call.
| MichaelCollins wrote:
| I think you're misplacing blame for this, I don't think the
| engineers are the root cause of this problem. Why don't
| these devices let people set their own custom codephrases?
| I suspect that wouldn't fly with management/marketing etc,
| who want to create a marketable brand out of the codeword.
| In fact I'm virtually certain that engineers at amazon
| weren't the ones who chose the name "Alexa" in the first
| place, that decision probably went through a dozen squared
| meetings and committees of marketers and PR people.
| neon_electro wrote:
| They have since 2021, expanding the set to "Alexa",
| "Amazon", "Computer", and "Echo". [0]
|
| Nonetheless, defaults are incredibly powerful. [1]
|
| [0] https://www.rd.com/list/how-to-change-alexas-name/
| [1] https://www.nngroup.com/articles/the-power-of-
| defaults/
| danrocks wrote:
| Alexa was only the 32nd most popular name in the US for
| girls. A little over 6k babies were named Alexa in the US
| prior to the speaker's launch.
|
| The "Alexa stop" thing, is it a real or invented harm?
|
| My name happens to match the lead character of Karate Kid
| and I was constantly asked to do the crane pose when I was 7.
| Doesn't seem to have traumatized me.
| MichaelCollins wrote:
| > _Alexa was only the 32nd most popular name in the US
| for girls._
|
| 32nd most popular is not exactly obscure. Why did they
| have to give the computer a human name in the first
| place? Probably because it helps people form some sort of
| parasocial relationship to the product, which is gross,
| but probably good for business.
| danrocks wrote:
| I used to work there, after the launch (and therefore after
| the naming). One of the reasons given was that Alexa was a
| distinctly suitable word for proper recognition by the
| ASR model embedded on the device software.
|
| Also probably because it was good for business.
| mint2 wrote:
| The popularity of the name cratered after launch. That
| doesn't signify anything to you?
|
| That anecdote is nice, but honestly it sounds like you
| survived a less frequent, more temporary, and much milder
| version, and now you're stating everyone else needs to get
| over themselves?
| danrocks wrote:
| What does it signify? Annoyance? Yes. Harm, such that it
| required a committee of enlightened priests to have blocked
| it? Where is the evidence for it?
| mint2 wrote:
| For illustrative purposes: "Dan stop! Dan stop! Dan set
| timer for 50 minutes." "Timer set for 15 minutes." "Dan
| stop! Dan cancel timer! Dan set timer for Fiifttyy minutes."
| "Dan turn off kitchen light." "Dan set thermostat to 68."
| "Dan play music."
|
| Your name is now Kleenexified to mean robotic female
| servant, no harm!
|
| It doesn't seem like you googled or looked into how
| people named Alexa actually feel before pronouncing how
| they should feel.
|
| This comment chain really shows why these responsibility
| vetting teams are needed, a lot of corporate workers are
| not empathetic or considerate beyond their immediate
| siloed task and assume everyone should react exactly the
| same as they did to only very tangentially similar
| experiences.
| nh23423fefe wrote:
| Inventing harms to prevent, like arsonist firefighters.
| slantedview wrote:
| And here I thought it was a given that everyone understood
| the harm of social media, for example, to children and
| teenagers.
| SpicyLemonZest wrote:
| That's exactly why I'm skeptical of whether a team like
| this could have been addressing real harms. If I heard
| about a nutrition team at Frito-Lay, I would assume
| they're working on nonsense until proven otherwise,
| because how could you meaningfully improve nutrition
| under the constraint that your company needs to sell lots
| of potato chips?
| danrocks wrote:
| https://en.wikipedia.org/wiki/Think_of_the_children#:~:te
| xt=....
| kemotep wrote:
| We have verifiable evidence of Facebook as a platform
| being used to instigate genocide[0] among other issues.
| Dismissing a concern that a platform could be used for
| harm against children as fallacious reasoning is a
| fallacy fallacy if you have no additional points to add
| to the discussion as to why you feel that is relevant.
|
| [0]: https://www.bbc.com/news/world-asia-46105934
| noptd wrote:
| I'm no fan of Facebook but I have a hard time
| understanding why Facebook is singled out for this. If
| what FB did is illegal, then they can be charged for
| their crimes.
|
| However, if we're criticizing from a purely moral
| standpoint, how is this any different than claiming that
| cell phone carriers should be preventing this type of
| thing over phone calls or texts?
|
| For the record, I don't find that to be a convincing
| argument either but it's the inconsistency of perspective
| that irks me.
| kemotep wrote:
| That's fair but again the post in question was
| essentially responding with "fallacy" and no further
| comment or context.
| danrocks wrote:
| The Rwandan genocide was spawned by radio propaganda from
| RTLM. Classifying social media as especially harmful to
| children when damage can be done through any sort of mass
| media is disingenuous.
| nh23423fefe wrote:
| But they actually care about
|
| > [getting] the Facebook dating team's ... to avoid
| including a filter that would let users target or exclude
| potential love interests of a particular race
|
| You see, you have to just ignore those people in the
| feed; you can't filter them, it's better and not racist
| that way. And who knows, you might become not racist if
| you see a pretty girl/boy you like, but actually that's
| probably just racist fetishizing.
|
| Responsible innovation is doing the same DEI doublespeak.
| nosianu wrote:
| Did you come up with that list, or do you actually know
| what the team _did_? I think only the latter provides a
| valuable entry point for discussion. In the case of the
| former, we end up fighting each other's imagined
| scenarios - and imagination is limitless. That sure leads
| to a lot of discussion, none of it bound by reality
| though.
|
| It would be nice to know what the team actually tried to
| do, did do, and achieved, but that would require an
| insider daring to make public comments that could be
| traced back to them (by their contents alone), so it's
| unlikely. Without actual facts the discussion will just
| end up a free-for-all.
| wpietri wrote:
| Funny that the "you don't actually know" critique comes
| in response not to the nakedly disparaging post that
| kicked this off, but to the comment arguing for
| responsible software development.
|
| We of course don't know enough of the specifics, because
| Facebook works hard to keep it that way. But we do know
| that Facebook has a body count. If you're looking for a
| "valuable entry point for discussion", maybe start with
| Facebook's well-documented harms.
| paganel wrote:
| And because I Ctrl+F-ed it and couldn't find anything,
| one of those documented harms is the Rohingya Genocide.
| Putting this here so that we know what we're talking
| about.
|
| Seeing devs non-ironically complain about internal
| departments like this one, which was set up precisely to
| keep that from happening again, saddens me a lot. No,
| productivity and work are not more important than making
| sure that work doesn't enable a genocide campaign in a
| specific corner of the world.
| thegrimmest wrote:
| I think people fundamentally disagree here. I would
| attribute the entire (read: 100%) responsibility for the
| Rohingya Genocide and other similar events to the
| perpetrators. Facebook is just another tool, and its
| creators bear no more blame for the actions of their
| users in the real world than the manufacturers of the
| vehicles driven by the Burmese military.
| drc500free wrote:
| Facebook could have chosen to be a completely neutral
| platform. They could have followed the ISP model, making
| Facebook another platform like email, RSS, or http. They
| just had to not make editorial decisions - leave the feed
| sorted by recency, and only remove illegal material. This
| is what safe harbor provisions assumed a company would
| do, allowing platforms who simply pass information
| between parties to avoid liability for that information.
|
| But they wanted to be valued higher, so they explicitly
| chose to instead step into the editor's role and became
| responsible for the content on the platform.
| PuppyTailWags wrote:
| There are laws that put the onus on banks to proactively
| ensure their services aren't used to fund terrorism, and
| there are multiple funky/opaque processes in banking
| specifically for that purpose. It kind of makes sense to
| me to hold social media companies to a similar standard:
| that they are not used to organize terrorism or other war
| crimes.
| thegrimmest wrote:
| I'm not sure I agree with these laws. They are very
| difficult to actually enforce to an objective standard.
| They also transfer the burden of law enforcement away
| from police departments and on to private organizations.
| What it translates to in practice is a bunch of (mostly
| irrelevant) mandatory training for employees, and an
| approval from an auditor who isn't very familiar with the
| business. I think police (and no one else) should do
| policing.
| riversflow wrote:
| You realize that other groups already do policing? IRS,
| EPA, OSHA, USPS; hell, the Secret Service will get
| involved if you are forging currency.
| [deleted]
| PuppyTailWags wrote:
| > What it translates to in practice is a bunch of (mostly
| irrelevant) mandatory training for employees, and an
| approval from an auditor who isn't very familiar with the
| business.
|
| In the context of ensuring a bank doesn't transfer money
| to terrorists, this is completely wrong. Banks have a
| whole list of required operations and processes, and
| failing to follow them can mean actual jail time. This is
| why "know your customer" exists in banking. In the
| context of terrorism, there is no police enforcement
| regarding terrorists; often, we are talking _military_
| enforcement.
| thegrimmest wrote:
| Yes, but my point is that "transferring money to someone"
| is not a _true crime_. It doesn't have a _victim_. And
| yes, our governments should use military/diplomatic
| channels to fight terrorism directly - that's what
| they're _for_.
| PuppyTailWags wrote:
| > Yes, but my point is that "transferring money to
| someone" is not a true crime. It doesn't have a victim.
|
| This is also incorrect. Transferring money to a terrorist
| organization is a crime because countries have declared
| it illegal. Of course well-funded terrorists have victims
| and enablement of well-funded terrorism has clear
| victims.
|
| And yes, the government is using military and diplomatic
| channels to fight terrorism directly - by ensuring the
| resources they have access to are limited.
| wpietri wrote:
| There are two possible things you could mean by that.
|
| One is that you think a lot of things just shouldn't be
| enforced, and that we should allow a lot more harm than
| we do now. Genocide?
|
| The other is that you think we should have a lot more
| police to take over the harm-reducing regulatory actions
| now in place. That we should take the tens, maybe
| hundreds of thousands of social media moderators now
| working, but make them government employees and give them
| guns.
|
| I can't decide which is more cartoonishly evil.
| thegrimmest wrote:
| I don't see why people working in an office need guns,
| but yes, enforcement of laws should be done by... law
| enforcement. This isn't too controversial really, if I
| make a credible threat to someone online, it's a criminal
| matter for the police. Just as if I had sent it in the
| mail. The same should be true for all other types of
| crime (fraud, money laundering, etc.). Police should (and
| do) conduct investigations and arrest offenders.
|
| Social media moderators exist to protect the public image
| of the platform, and enforce community guidelines. They
| should not be burdened with law enforcement simply
| because we can't be arsed to do proper police work.
| wpietri wrote:
| Responsibility is not zero sum. The people who pulled the
| triggers? Responsible. The people who gave those people
| the orders? Responsible. People who told them where to
| find the people they killed? Responsible. Arms dealers
| who sold them the guns? If they had any inkling that this
| was a possible outcome, then they share in the
| responsibility. The people behind the legacy media
| efforts that whipped people into a frenzy? Responsible.
| And so on.
|
| Facebook is not like a screwdriver, a simple, neutral
| tool that is occasionally used to harm. Facebook is an
| incredibly complex tool for connecting humans, a tool
| with all sorts of biases that shape the nature and
| velocity of social interaction.
|
| People have known for decades that online communication
| systems enable harm. This was a well-understood fact long
| before Facebook existed. Facebook is morally responsible
| for that harm (as are the perps, etc, etc). Something
| they understand perfectly well because they do a lot to
| mitigate that harm while crowing about how socially
| responsible they are being.
|
| You might disagree with most ethicists on this, as well
| as with Facebook itself. But you'll have an uphill
| struggle. Even the example you pick, vehicles, doesn't
| work, because car manufacturers have spent decades
| working to mitigate opportunities for the tools they
| create to cause harm. Now that cars are getting smarter,
| that harm reduction will include preventing drivers from
| running the cars into people.
| politician wrote:
| All communication systems enable harm, and more generally
| all systems that allow people to interact enable harm. In
| the US, the true responsibility for regulating harms lies
| with the duly elected government exercising its
| regulatory powers on behalf of the people. It does not
| lie with the unelected, unaccountable members of
| Responsible Innovation Teams and Online Safety Teams.
| This form of tyranny persists because the majority of our
| representatives established their power bases before the
| advent of the Internet. Hopefully, in the next decade or
| two, we will be able to effectively subjugate and
| regulate the King Georges of the large social platforms.
| thegrimmest wrote:
| Responsibility _is_ zero sum, or any conversations about
| it are meaningless. This is quite handily illustrated by
| your comment actually - where does it end? How much of an
| "inkling" must people have to carry responsibility? It
| doesn't sound like a question of fact exactly.
|
| The only way to objectively agree about responsibility is
| to use an actor/agent model, where actors are _entirely_
| responsible for their _actions_ , and only actions which
| _directly_ harm others are considered. Otherwise we're
| discussing which butterflies are responsible for which
| hurricanes. I'm happy to be wrong here, but I just don't
| see an alternative framework that can realistically,
| objectively, draw actionable boundaries around
| responsibility for action. This by the way is the model
| that is used in common law.
|
| Facebook being a complex tool strengthens my point.
| Should providers of complex tools be responsible for
| every possible use of them? Is it not possible to provide
| a complex tool "as is" without warranty? Wouldn't
| constraining tool makers in this way be fundamentally
| harmful?
|
| > _online communication systems enable harm... Facebook
| is morally responsible for that harm_
|
| _Everything_ can be seen to "enable harm". Facebook
| being morally responsible is not a statement of fact,
| it's an opinion. Facebook's mitigation efforts exist a)
| to evade/delay regulatory action, b) to maintain their
| public image, or c) because of a small group of activist
| employees. Only a) and b) align with their fiduciary duty
| to shareholders.
| wpietri wrote:
| > Responsibility is zero sum, or any conversations about
| it are meaningless.
|
| That's incorrect. I'd suggest you start with Dekker's
| "Field Guide to 'Human Error", which looks at airplane
| safety. Airplanes are only as safe as they are because
| many different people and groups see themselves as
| responsible. Your supposedly "objective" model, if
| followed, would drastically increase deaths.
| politician wrote:
| "The National Transportation Safety Board is an
| independent federal agency charged by Congress with
| investigating every civil aviation accident in the United
| States and significant accidents in other modes of
| transportation -- highway, marine, pipeline, and
| railroad."
|
| The reason airplanes are safe is because the government
| is doing its job to regulate the space.
| wpietri wrote:
| That might be _a_ reason, but it is not _the_ reason.
| politician wrote:
| Prior to the federal accident-investigation framework
| created by the Air Commerce Act of 1926 (a role the NTSB
| later inherited), aviation accidents were common [1].
| Congress determined it was necessary to establish an
| investigation and regulation framework at the very
| beginning of the era of commercial aviation, and this has
| been enormously successful. Many times Congress does not
| act fast enough to prevent harms (meat packing, banking,
| pharmaceuticals), but when they do get around to doing
| their job safety improves. Individual companies must be
| compelled to act subordinate to federal or state
| regulatory frameworks, and to not act as vigilantes.
|
| [1] https://en.wikipedia.org/wiki/List_of_accidents_and_i
| ncident...
| wpietri wrote:
| Oh, so post hoc ergo propter hoc?
|
| I'm not saying the NTSB isn't important. But it's far
| from the only reason that we have fewer crashes.
| Government regulation can be helpful in improving
| standards, but it sets a minimum bar. Aviation's high
| levels of current safety are a collaboration between
| many, many people. Starting with individual pilots,
| engineers, and maintenance techs, going up through
| collaborative relationships, through companies and civil
| society organizations, and up through national and
| international regulatory agencies. All of these people
| are _taking responsibility_ for safety.
| thegrimmest wrote:
| Every aviation accident post mortem I've read finds a
| finite list of causes and contributing factors.
| Each system/person has strong ownership that is
| _responsible_ for making the recommended changes. These
| post mortem reports also _explicitly_ do not find civil
| or criminal _liability_ - that is a zero-sum process.
| wpietri wrote:
| Liability is indeed strict zero sum. But you're confusing
| that with moral responsibility, which isn't.
|
| They serve two different purposes. The former comes out
| of a zero-sum, adversarial setting where the goal is to
| figure out who pays for a past harm. The latter comes
| from a positive-sum collaborative setting where everybody
| is trying to improve future outcomes.
|
| If I release a new product tomorrow, I'm responsible for
| what happens. As are the people who use it. But if
| somebody dies, then liability may be allocated only to
| one of us or to both in some proportion.
|
| Again, read Dekker.
| thegrimmest wrote:
| "Responsibility" is semantically a bit nebulous, but
| seems to me much more related to "liability" than
| "continuous improvement". The question "Who is
| responsible?" reads a lot more like "Who is liable?" than
| "How did this bad thing happen?". If you release a new
| product, you may be accountable to your org for how it
| performs, but (IMO) you're not morally responsible for
| the actions of others using it. If your new product is a
| choking hazard you're not guilty of manslaughter.
|
| "morally responsible" ~= "guilty"
| sdoering wrote:
| > If your new product is a choking hazard you're not
| guilty of manslaughter.
|
| But you are still imho (morally) responsible for the
| deaths arising from the use of your product (this is
| where we would probably disagree), even if you were not
| legally guilty.
|
| I like another example that, for me, clarified the
| distinction between these concepts.
|
| Imagine one morning you open your front door and find a
| baby that was placed there sometime during the night. The
| child is still alive. You are not guilty in any way for
| the baby's fate, but now that you have seen it you are
| responsible for ensuring that it gets help. You would be
| guilty if you allowed it to freeze to death or similar.
| thegrimmest wrote:
| > _You would be guilty if you would allow it to freeze to
| death or similar._
|
| This significantly varies by jurisdiction, and isn't
| settled at all. I don't think being _present_ makes you
| _responsible_ either. Unappealing as it may seem, you
| should indeed be able to pull up a chair, have some
| popcorn, and watch the baby freeze. People should only
| bear obligations they explicitly consented to. I don't
| think _anyone_ has the _moral authority_ to _impose_ such
| an _involuntary obligation_ on _anyone else_.
|
| Modelling society as a constrained adversarial
| relationship between fundamentally opposed and competing
| groups is more accurate than assuming there is "one team"
| that knows a fundamental "good" or "right" and that the
| rest of us just need to "see it". People who perform
| honour killing or preside over slavery are just as sure
| of their moral superiority as you are. What we need is a
| world where we can _coexist peacefully_ , not one where
| we are all united under one religion of moral
| correctness.
| wpietri wrote:
| Yeah, when I say that Facebook has a body count, I'm not
| kidding. Facebook touches people's lives in so many ways
| that it's hard to even estimate the total number. But it
| seems the barest ethical minimum to say, "Hey, is there a
| way to be sure this next feature is responsible for zero
| deaths?"
| Dracophoenix wrote:
| > "Hey, is there a way to be sure this next feature is
| responsible for zero deaths?"
|
| What makes you think this is possible? I don't see where
| Facebook is particularly responsible here. Telephones and
| radios have been used to coordinate assassinations and
| genocides. Movies have been used to justify invasions.
| Why isn't anyone burning the effigies of Alexander Graham
| Bell, Guglielmo Marconi, and Thomas Edison?
| wpietri wrote:
| Sorry, explain to me how Marconi directly profited from
| the Rwandan genocide?
|
| In any case, perfection is rarely possible but often an
| excellent goal to aim for. For example, consider child
| car fatalities. We might not immediately get it to zero,
| but that is no reason to say, "Fuck it, they can always
| have more kids."
| [deleted]
| pessimizer wrote:
| I think it's safe to assume that the team did the thing
| that it was assigned to do and that it was named for. It
| certainly makes more sense to discuss _that_ than to
| announce that we have no ability to discuss anything
| unless we were personally both on the team and managing
| the team, and were involved in the decision to cut it.
|
| edit: I'm sorry, we do get to discuss how they were
| probably wrecker nutcases stifling people who actually
| build things in order to make up for their own inadequacies
| and inability to do the same. It's only assuming that the
| ethics team worked on ethics that is out of bounds.
| jfengel wrote:
| An awful lot of people are making comments based on the
| assumption that such a team exists only to invent problems.
| It's worth at least one person interjecting that Facebook
| is causing at least some problems and that a team like this
| could have a place, even if nobody knows precisely what
| this team did.
|
| Many people are taking it for granted that Facebook should
| have no interest in reducing harm. I'm glad somebody pushed
| back on that.
| hedora wrote:
| My experience is that most people in engineering
| organizations are not sociopaths, but some are.
|
| The problem is trying to get people who think everything you
| listed is obvious and boring to spend 40 hours a week staying
| out of the way 99% of the time, while also being politically
| savvy enough to get a few CxOs fired each year.
|
| Also, since the office is normally doing nothing (unless the
| office has already completely failed), the people in it need
| to do all of that, and continuously justify their
| department's existence when their quarterly status updates
| range from "did nothing" to "cut revenue by 5% but
| potentially avoided congressional hearings 5 years from now"
| to the career-killing "tattled to the board and tried to get
| the CEO fired".
|
| If you know how to hire zero-drama, product- and profit-
| focused people that can effectively do that sort of thing,
| consider a job as an executive recruiter.
| [deleted]
| CivBase wrote:
| Sometimes that's what you need.
|
| EDIT: I don't know about these specific people or whether the
| blockers they put in place are justified. However, I've
| definitely worked with developers who obviously prefer to just
| write code until something works with little regard to the
| reliability, efficiency, security, or maintainability of their
| output. I spent almost all of my time cleaning up messes rather
| than getting to build anything new myself. It's a terrible
| experience.
| commandlinefan wrote:
| > regard to the reliability, efficiency, security, or
| maintainability
|
| Yeah, that's not what this team was responsible for, though.
| That's what an enterprise architecture team _ought_ to be
| responsible for, but rarely is.
| scifibestfi wrote:
| They are bullshit jobs for the overproduced elite who demand
| high status. Now they are going to find out these jobs are
| only viable in a bull market.
| klyrs wrote:
| Are you saying that the "builders" at Facebook are
| salt-of-the-earth plebeians? That has not been my experience.
| ceres wrote:
| > overproduced elite
|
| How many elite should there be?
| danrocks wrote:
| Enough to staff all Starbucks outlets in the country. /s
| scifibestfi wrote:
| Roughly speaking, the amount that can be absorbed into the
| power structure.
|
| https://en.wikipedia.org/wiki/Elite_overproduction
| JumpCrisscross wrote:
| > _How many elite should there be?_
|
| The number we have today minus those unemployed minus those
| on responsible innovation teams. (Using the broad
| definition of elites, which captures pretty much everyone
| who went to college.)
| mikotodomo wrote:
| What?? The internet is extremely toxic. If you don't have
| supervision while building your app, it will attract the
| most horrifying people. Like this one time someone almost
| convinced me that the age of consent should be abolished,
| but I realized they were abusing my naivety and
| gaslighting me. We need platforms developed lawfully, and
| to keep making sure users are held liable, to prevent
| that.
| archagon wrote:
| Maybe if the builders stopped building shitty things, we
| wouldn't need teams like this.
| dgs_sgd wrote:
| Well put. Intentions can start off genuine, where the
| non-builder truly believes the processes and barriers
| they set up improve building (and early on that's usually
| true), but it easily morphs into a tool for maintaining
| power and influence regardless of the value added.
| mind-blight wrote:
| I strongly disagree. I worked at an analytics company that had
| and needed a privacy ethics department. They were great. It was
| a mix of lawyers, philosophers (degree in the subject), and
| former developers.
|
| They consistently thought about the nuances of baking privacy
| into a product in a way that I didn't have the background or
| time for. Every time I worked with them, they helped me build a
| better product by helping me tweak the specs in a way that had
| minimal impact on our users (unless they were a bad actor) and
| strongly increased privacy protections. It was like having a
| specialized PM in the room.
| mariojv wrote:
| What did the philosophers do on a day-to-day basis? I am
| curious; it sounds like an interesting role.
| ajcp wrote:
| I would think they'd more accurately be described as
| "ethicists" and probably fulfilled a function closer to a
| Business Analyst. Reviewing and transmitting requirements,
| gathering use-case data, reviewing standards & procedures,
| and working on "fit and function" type stuff.
| strix_varius wrote:
| > Every time I worked with them, they helped me build a
| better product by helping me tweak the specs in a way that
| had minimal impact on our users (unless they were a bad
| actor) and strongly increased privacy protections. It was
| like having a specialized PM in the room
|
| It sounds like they were focused on building product too,
| then, which is not at all what this is about.
| Kiro wrote:
| > philosophers
|
| Sounds like something from Silicon Valley, the TV show.
| m-ee wrote:
| I don't remember the source but there was a linked article
| here a while ago about how to effectively manage "blockers"
| like this. The example given was a med device company where
| the engineers hated the regulatory department who always told
| them no. Manager decided to actually increase engagement and
| bring on one person from the reg side to their meetings. At
| some point a person said "well we'd love to do this but no
| way would the FDA allow us to do it." The regulatory employee
| basically said "do you even know the regulations? There's a
| very easy pathway to doing exactly that."
|
| My boss once did a similar thing with the quality department.
| Suddenly we sailed through our DFMEAs. Some people do live to
| block others, but some are trying to do an important job.
| Engagement usually pays more than just whinging.
| mertd wrote:
| I don't know what this team was or did. However, to paraphrase
| Jeff Goldblum from Jurassic Park, if everyone is preoccupied
| with building things, who will be the one who stops and thinks
| about whether they should?
| agentdrtran wrote:
| 'Blockers' like "have moderation staff speak the language of
| the country you operate in"? Feature velocity is not an
| inherent good.
| aaronbrethorst wrote:
| 'Move fast and incite genocide' https://www.theguardian.com/t
| echnology/2021/dec/06/rohingya-...
| thegrimmest wrote:
| Yep, Facebook incited genocide - not the actual people who
| made the posts - Facebook. Lay the blame at the feet of the
| perpetrators, not the makers of the tools they used. This
| is like blaming Mercedes for the Holocaust because they
| made Hitler's car(s).
| PuppyTailWags wrote:
| I'm really confused by this, because there are most
| certainly cases where companies are required by law not
| to provide goods or services to war criminals or
| terrorists. So it's already established that yes, you can
| blame the makers of the tools that perpetrators used.
| thegrimmest wrote:
| I wouldn't conflate the offloading of policing functions
| on to private corporations with the assignment of moral
| responsibility.
| agentdrtran wrote:
| Mercedes didn't have an easy way to detect and shut off
| violence that they simply decided not to use. I
| understand the point you're trying to make, but following
| that logic, should there be no moderation anywhere, for
| anything, if it's ultimately only the poster's fault when
| they post something illegal like "let's all go to bob's
| house at 123 lane and kill him"?
| aaronbrethorst wrote:
| As it works out, I would say it's more in the vein of
| blaming IBM for their culpability in the Holocaust, not
| Mercedes, although clearly Meta's role was nowhere near
| as awful as IBM's.
| https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
| thegrimmest wrote:
| Can I ask why? The Nazis could no more perpetrate the
| Holocaust without cars than they could without computers.
| cletus wrote:
| 100% this.
|
| It's very much in the spirit of "the bureaucracy is expanding
| to meet the needs of the expanding bureaucracy". There's no
| better way for someone to justify their existence than
| creating problems while appearing to solve problems for other
| people. Compliance, security, etc. These functions are usually
| necessary, but they do attract a certain kind of person who
| seems predisposed to empire building through obstacle
| creation.
|
| You see the same thing with people who become moderators. I saw
| this years ago on Stack Overflow. You see it on Wikipedia. You
| see it on reddit. You have to be constantly vigilant that these
| people don't get out of hand. These people end up treating the
| process as the goal rather than the means to an end.
|
| I remember when the moderators started closing SO topics as
| "not constructive". There were a ton of questions like "should
| I do X or Y?" and the mods decided one day to start closing
| these because there was no definitive answer. But you can list
| the advantages and disadvantages of each option and the
| factors to consider. That can be incredibly valuable. Can it
| descend into a flamewar? Absolutely. But just kill the
| flamewars and the people who start them. No point throwing out
| the baby with the bathwater.
| btilly wrote:
| https://en.wikipedia.org/wiki/User:Guy_Macon/Wikipedia_has_C.
| .. comes to mind as another good example of this dynamic.
|
| For another, the ever expanding administrative overhead of
| colleges is a large contributor to driving tuition increases.
| See https://stanfordreview.org/read-my-lips-no-new-
| administrator... for an idea of how bad the problem is
| getting.
| Floegipoky wrote:
| > I remember when the moderators starting closing SO topics
| as "not constructive".
|
| I don't think that was a decision made by the mods; I think
| that was policy created by SO. I really like that it's so
| focused, clearly defines what is on and off topic, and
| strictly enforces that policy. You're right, such discussion
| can be very valuable, but if these policies didn't exist or
| weren't enforced, I don't think the site would be nearly as
| useful. Doesn't mean that the people with closed questions
| are bad, or the conversation can't be had in chat or a
| different site.
|
| > Can this descend into a flamewar? Absolutely. But just kill
| the flamewars and the people who start them.
|
| Meanwhile the strong contributors get tired of dealing with
| (even more) toxicity and move on, the signal:noise ratio
| drops, and it turns into just another (ostensibly)
| programming-focused forum. Not to mention that the more time
| mods spend dealing with toxic behavior, the less time they
| have for work that improves the content of the site.
| cletus wrote:
| > I don't think that was a decision made by the mods, I
| think that's policy created by SO.
|
| There was little to no direction from the top, at least at
| that time. Mods were just self-appointed community members
| who organized in such a way as to impose their will on the
| community. It was pretty classic tyranny of the minority.
| Many of the more prolific contributors (of which I was one
| at the time) were deeply frustrated by this. It was a
| frequent topic on Meta SO.
|
| You saw these waves of subtle policy changes if you tracked
| edits to your contributions. Suddenly there'd be a bunch of
| edits all based on some new formatting policy and a lot of
| it was just pointless busywork.
|
| Some people just fall into this trap of elevating the
| process to where the process becomes a goal and is treated
| as being of equal (or even higher) value to the content it's
| applied to. Moderation is important and has a place but, left
| to
| their own devices, moderators will create work for
| themselves just based on their idle capacity. You need to
| avoid this.
| commandlinefan wrote:
| > get tired of dealing with (even more) toxicity
|
| And then "toxicity" just gets redefined to an always
| leftward-moving definition of anything "right-wing". Every
| time.
| jarjoura wrote:
| This is always a vicious cycle that starts with good intent but
| spirals into an oppressive regime.
|
| 1) It starts after the product launch post-mortem.
| 2) The team creates a process to avoid repeating mistakes.
| 3) The process innocently includes the ability to hard block
| any product changes without someone or some group's approval.
| 4) Politics emerges behind the scenes from those with social
| capital to get around the hard blocks.
| 5) A new process to make things "more fair" adds more hard
| blocking rules, usually resulting in lots of pointless
| meetings and paperwork to justify the new process and show
| upper management that things are working as intended.
| 6) More politics as teams start hiring for people who are
| able to get around the heavy process by being "good at the
| game".
| 7) A new team is formed around the process because it's now a
| full-time job.
| 8) People good at politics join this new team and everyone
| else is pulled into pointless meetings and handed paperwork.
| 9) Rinse and repeat.
|
| The problem I've always observed is that these systems are put
| in place with the best intent: to offer guidance so that the
| company, employees, and consumers can all benefit by building
| the right thing. However, the systems are given blocking
| abilities that over time do more damage than good.
|
| You'd think that if a process had no blocking power no one
| would listen, and maybe that's right. I honestly have no idea
| how teams can avoid this spiral trap; however, I don't think
| it's a bad thing to sunset processes like this after a while.
| Just as Google is willing to throw away products on a whim, we
| should be willing to throw away process teams on a whim.
| benjaminwootton wrote:
| The idea is that those blockers or objections are valid.
| Something is going wrong if they continually raise issues
| which do not meet the organisation's goals.
|
| That said, it does sound like a strange function to allocate to
| a specific team.
| __derek__ wrote:
| IME, they don't have the authority to invent/implement
| blockers. Instead, they attend/hold conferences, give talks,
| and write documents that nobody acts on. In theory, it's a good
| way for a company to advertise/lobby to the think-tank set.
| Clearly that hasn't played out for FB, hence disbanding the
| team.
| groby_b wrote:
| Good god, as if "builder" were the only type of person that
| has value.
|
| Most "builders" are, if not constrained, like cancer cells:
| unlimited consumption & growth without a counterbalance. In
| order to justify their own existence, they need to keep
| churning out things without concern for what they consume or
| what use their product has.
|
| Maybe, just maybe, there's value in having different kinds of
| personalities - unchecked by itself, pretty much every tendency
| is ruinous.
| jason-phillips wrote:
| People who produce things are cancer. Got it.
|
| Also, false dichotomy there saying a "responsible innovation"
| team is needed to limit the building of products with no use;
| that's a solved problem. We have markets for that.
| unicornhose wrote:
| The people who choose to build things (it's not a binary
| quality, everyone can build, some are better) are not
| taking moral responsibility for the things they are
| building.
|
| It is hurting society. What is to be done? That is the
| issue at the heart of this headline.
| klyrs wrote:
| Thomas Midgley Jr. was an impressively prolific builder.
| Navigators are just as important as drivers. You need both,
| not either in exclusion.
| opmelogy wrote:
| Sometimes putting blockers in front of people is absolutely
| what's needed. It's a common tactic for compliance and security
| - both of which can completely tank your company if not handled
| correctly. I suspect it's especially true if there are people
| with the skills to build who claim they are the only ones that
| are "actually trying to build things." Likely they don't have
| the right level of understanding of what needs to be built.
| thomassmith65 wrote:
| If the goal is to stay in business long term, thoughtless
| 'building' isn't worth much.
|
| Steve Jobs: "focus is about saying no"
|
| Mark Zuckerberg: "move fast and break things"
|
| Facebook is the world's least-trusted, most-reviled brand. They
| earned it by building garbage.
| concinds wrote:
| This isn't a surprise.
|
| Meta's headcount grew 32% in one year, and revenue went down 1%
| YoY in Q2. Wall Street expects it to go down even more YoY in
| Q3. Layoffs usually happen in September since that's when
| budgets are set. So they objectively overhired, expect to
| shrink, and are now laying off non-revenue-generating divisions.
|
| Some can argue there's no such thing as a non-revenue-
| generating division. Every division contributes, and this one
| could have helped with the Meta brand, public perception, user
| retention, etc. But the real way you solve that (as AI
| researchers have found) is not having a firefighting team
| that's expected to fix everything; it's instilling the proper
| behaviors, processes, and culture within each and every team
| in the company. Having these discrete "divisions" serves PR
| goals more than anything, and that's expensive PR.
|
| This is exactly like Steve Jobs laying off the Advanced
| Technology Group. Did that signal a "neglect" of R&D? No.
|
| I personally believe that even Meta management is far more
| pessimistic about its stock than Wall Street is, and that's why
| they're playing it careful. Wall Street expects a 10% growth in
| revenue in 2023. Usercount isn't growing. So FB needs to cut
| spending and focus on core product and increasing revenue per
| user. Straightforward.
| colpabar wrote:
| "Group was put in place to address potential downsides of the
| company's products; Meta says efforts will continue"
|
| should really say
|
| "Group was put in place to make people think the company cares
| about the downsides of the company's products; Meta says
| something meaningless about continuing efforts"
| bhhaskin wrote:
| I bet they are having to take a long hard look at where they are
| spending their money. A team like that is great when money is
| plentiful, but a drain when it's not.
| Mountain_Skies wrote:
| There's been a lot of talk from Zuckerberg lately about there
| being too much cruft at the company. I wonder if he's feeling
| pressure from investors and the tech world in general over his
| big bet on the Metaverse. If he can make the company leaner, he
| might be able to reduce some of the pressure.
| Mountain_Skies wrote:
| Did they have the power to make the changes needed to
| ameliorate the harms they identified?
| ethanbond wrote:
| Of course! So long as they don't affect business objectives.
| Overtonwindow wrote:
| Maybe they found it was hopeless...
| adam12 wrote:
| Did they form a new team to deal with the issues the "Responsible
| Innovation Team" discovered?
___________________________________________________________________
(page generated 2022-09-09 23:00 UTC)