[HN Gopher] Atlassian Cloud ToS section 3.3(I) prohibits discuss...
       ___________________________________________________________________
        
       Atlassian Cloud ToS section 3.3(I) prohibits discussing performance
       issues
        
       Author : dmitriid
       Score  : 286 points
       Date   : 2021-01-02 16:35 UTC (6 hours ago)
        
 (HTM) web link (www.atlassian.com)
 (TXT) w3m dump (www.atlassian.com)
        
       | Quarrel wrote:
       | From a Tableau terms of service:
       | 
       | https://www.tableau.com/tlc-terms
       | 
       | 3(g): "publicly disseminate information regarding the performance
       | of the Tableau Beta Products."
       | 
       | 3(f) is also identical, but they otherwise don't look that
       | similar.
       | 
       | They're both > $50B companies, so you'd think they could get
        | their lawyers to use different verbiage..
        
       | uncledave wrote:
       | As a user without a contract I'd like to point out that the
       | performance is universally shitty :)
        
       | simon1ltd wrote:
       | Nothing says "great product" like a ToS that bans you from
       | discussing the problems.
       | 
        | I self-host a Confluence install, and its performance is poor
        | even with a good-sized VM and absolutely zero other traffic to
        | it.
        
         | odiroot wrote:
         | It's the Oracle school of public relations via litigation.
        
         | Buttons840 wrote:
         | "I won't comment on the performance, but let's just say there's
         | a reason their TOS forbids talking about it."
         | 
         | If you ban objective criticism, I guess all you're left with is
         | FUD.
        
         | icameron wrote:
          | Yep, Confluence is an amazingly overly complicated thing! I
          | had it on a modern Windows server that boots up in 60
          | seconds. From that point, Confluence takes no less than 10
          | minutes to start before it responds to web requests.
          | Fortunately we switched to Teams this year - no more
          | exorbitant Confluence renewal fees and clueless support.
        
           | jjuhl wrote:
           | "Yep Confluence is an amazingly overly complicated thing" -
            | yeah, with an incredibly slow editor that screws up even
           | simple page edits constantly. It's a complete shit show.
           | There's a reason we call it "cuntfluence" at my workplace.
        
           | trevorishere wrote:
           | SharePoint Server is an absolute beast with a bunch of legacy
           | code yet it has no issues being responsive as soon as it JITs
            | (which can take ~30 seconds or so after an App Pool
           | startup).
           | 
           | SPO is also responsive (the last thing to load is typically
           | the SuiteNav which doesn't impact working with the
           | site/content).
           | 
           | I'm not sure why a company like Atlassian would have these
           | persistent performance issues.
        
       | NotChina wrote:
       | So much for SLAs?
        
         | tehwebguy wrote:
         | We replaced our SLAs with NDAs!
        
       | dalrympm wrote:
        | I'm curious: I'm a Cloud customer, and I can tell you that the
        | service is incredibly slow even for a small-scale setup (2 Jira
        | projects and 3 Confluence workspaces). There's an insane amount
        | of network requests, seemingly for mouse tracking. By telling
        | everyone here that Atlassian Cloud products are insufferably
        | slow, am I violating the ToS?
       | 
       | I was actually thinking about doing a write up on the issues I've
       | had but this seems to make me think I should do so AFTER I find
       | someplace else to go. Right now GitHub is the likely destination
       | but would love to hear other suggestions.
        
         | isodev wrote:
         | JetBrains recently announced Space and it looks pretty cool
         | (and fast): https://www.jetbrains.com/space/
        
           | fullstackwife wrote:
           | "looks cool and fast" - DMC DeLorean ?
        
           | foepys wrote:
           | Currently cloud-only but an on-premises version has been
           | announced. So while Atlassian goes cloud-only, Jetbrains is
           | going in the other direction.
        
           | dalrympm wrote:
           | I'm a heavy JetBrains user but I've found their server side
           | offerings to be lacking. I haven't been a fan of YouTrack in
           | the past but maybe it's time to give them another try.
        
         | confluence_perf wrote:
         | Sorry to hear it's been a frustrating experience. I'm a PM for
         | Confluence Cloud and we're always trying to make it better.
          | Would you be willing to share more specifics, such as:
          | 
          | - Pages with content X are the slowest
          | - Trying to do A/B/C is annoyingly slow
          | - etc.?
         | 
         | (edit: looks like HN is severely limiting my reply rate so
         | apologies for delays)
         | 
         | We're trying to focus on frustrating pages/experiences rather
         | than number of network calls and such, because while the latter
         | is correlated, the former (frustrating slow experiences) is
         | really the bar and the higher priority.
         | 
         | In terms of the ToS I'm not from legal so can't say (still
         | looking into it), but have definitely had conversations with
         | users on public forums about performance issues, and afaik no
         | one has been accused of violating their ToS.
         | 
         | (edit: since I can't reply due to HN limits I'll try to add
         | some stuff in this edit)
         | 
         | ------- @plorkyeran "target things that are easier to fix than
         | those with highest impact" -> this is a good point and
         | something we're trying to do. Engineers know the former (easier
         | to fix) pretty readily, but identifying "highest impact"
         | requires some work, so I'm (as a PM) always trying to find out.
         | It's of course some combination of these two (low hanging
         | fruit, high impact items) that forms the priority list.
         | 
         | ------ @igetspam (moved followup into a reply to trigger
         | notification)
         | 
         | ------@core-questions "perf to take over company for 6mo-1yr"
         | I'm not in the position to make that level of decisions, but
         | can certainly pass the feedback up the chain. The perf team is
         | trying their best though, so any info anyone can provide us can
         | help us apply our resources in the right place
        
           | michaelt wrote:
           | I did a test for you just now. I have 100Mbps internet, 32GB
            | RAM, a 4GHz i7 processor, and suchlike. To make it easy for
           | Jira, I'm doing this at a weekend, late at night, during the
           | new years holiday so the servers shouldn't be busy.
           | 
           | On a cloud-based classic software project (which has less
            | than 200 issues), opening a link to an issue takes 4.8
           | seconds for the page to complete rendering and the progress
           | bar at the top of the screen to disappear.
           | 
           | Opening a Kanban board with 11 issues displayed? 4.2 seconds
           | for the page to load.
           | 
           | Click an issue on the board? 2.5 seconds for the details to
           | pop up.
           | 
           | Close that task details modal - literally just closing a
           | window? 200 milliseconds. Not to load a page - just to close
           | a modal!
           | 
           | In case I'm being hard on cloud Jira by insisting on using a
           | classic project, I also checked with a 'Next-gen software
           | project' with less than 2000 issues.
           | 
           | I click a link to view a particular comment on an issue. 4.8
           | seconds until the issue, comment and buttons have all loaded.
           | 
           | I choose to view a board? 9.9 seconds from entering the URL
           | to the page load completing.
           | 
           | I'm viewing the board and I want to view a single issue's
           | page. I click the issue and the details modal pops up - and
           | just as I click on the link to the details, the link moves
           | because the epic details have loaded, and been put to the
           | left of the link I was going for, causing me to click the
           | wrong thing. So this slow loading is a nontrivial usability
           | problem.
           | 
           | View a single issue, then click the projects dropdown menu.
           | The time, to display a drop-down menu with three items? 200
           | milliseconds.
           | 
           | This is what people mean when they say the performance
           | problems are everywhere - viewing issues, viewing boards,
           | viewing comments, opening dropdowns, closing modals? It's all
           | slow.
           | 
           | And if you imagine a backlog grooming meeting that involves a
           | lot of switching back and forth between pages and updating
           | tickets? You get to wait through a great many of these
           | several-second pageloads.
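(Editor's aside: michaelt's figures above were collected by hand. A minimal sketch of repeating such a measurement programmatically is below; the timed action is a placeholder, since real page-load timing needs a browser-automation tool, so treat the printed numbers as illustrative only.)

```python
import time
import statistics

def time_action(action, trials=5):
    """Time a zero-argument callable over several trials.

    Returns (median, worst) elapsed seconds: the median smooths
    one-off spikes, the worst is what a user remembers.
    """
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        action()  # stand-in for "open an issue", "load a board", etc.
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), max(samples)

if __name__ == "__main__":
    # Placeholder action; real page-load timing needs browser automation.
    med, worst = time_action(lambda: time.sleep(0.01), trials=3)
    print(f"median {med * 1000:.0f} ms, worst {worst * 1000:.0f} ms")
```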
        
             | davb wrote:
             | That tallies with the experience I had using Jira cloud a
             | few years ago. It sounds like it's still a great case study
             | in how not to architect an issue tracker.
        
             | confluence_perf wrote:
             | Hi michaelt,
             | 
             | Thank you for the numbers -> I agree these are slow, and I
             | can guarantee you that the Jira team is working on it
             | (though I can't talk about details). These numbers are
             | definitely outside of the goals.
             | 
             | I appreciate the call out of "page to complete rendering
             | and the progress bar at the top of the screen to disappear"
             | and "until the issue, comment and buttons have all loaded".
             | In a dream world of course, everything would load in < 1s
             | (everything drawn, everything interactive), but working our
             | way down to that will take time.
             | 
             | We're currently looking at each use case to understand the
             | '(a) paint faster vs (b) interactive faster' tradeoff and
             | trying to decide which cases the user has a better
             | experience with (a) or (b). In Confluence this is clearer
             | in some places than in others, but in Jira it's less clear
             | I think (I work on Confluence, I probably shouldn't speak
             | for Jira specifics).
             | 
             | It always comes down to a limitation of resources though,
             | which is why we're always hoping to get as specific
             | feedback as possible.
        
               | orangecat wrote:
               | Hopefully this demonstrates that the anti-performance-
               | discussion ToS clause is harmful not only to your
               | customers but to you as well. You're getting useful
               | information here only because some people are willing to
               | openly violate it.
        
               | Judgmentality wrote:
               | Not to mention the reputational damage from people asking
               | "why the hell is this in the contract in the first
               | place?"
               | 
               | It says they're so afraid of the quality of their product
               | they'd rather litigate their customers than fix their
               | product.
        
               | jabberwcky wrote:
               | I hate to pile on a thread where you're already taking a
                | lot of flak, but this point is really important to the
               | future of Atlassian:
               | 
               | > In a dream world of course, everything would load in <
               | 1s (everything drawn, everything interactive),
               | 
               | As a contractor, I have more or less walked out of or
               | refused interviews on discovering Atlassian toolset was
               | in use. It's not because I hate your tooling (it is
               | visually nice and very featureful), it's because the
               | culture that delivered this software is antithetical to
               | anything I look for in a software project I want to use
               | or contribute to. How can I possibly do my job to any
               | degree of satisfaction when I'm tracking work in a tool
               | that requires 15 seconds between mouse clicks? That is
               | the reality of Jira, and as a result I refuse to use it,
               | or work for people who find that acceptable, because it's
               | a "broken window" that tells me much more about the
               | target environment than merely something about suboptimal
               | bug trackers.
               | 
               | Your page budget should be 100ms max, given all your
                | tools actually do is track a couple of text fields in a
               | pleasing style. Whoever the architecture astronauts are
               | at Atlassian that created the current mess, flush them
               | out, no seat is too senior -- this is an existential
               | issue for your business.
        
               | jerf wrote:
               | "'(a) paint faster vs (b) interactive faster'"
               | 
               | It is only a tradeoff if you're at the Pareto optimality
               | frontier [1] for those two things.
               | 
               | I _seriously_ doubt that you are. You should absolutely
               | be able to have more of both.
               | 
               | I would recommend to you personally two things: Open the
               | debugger, and load a page with an issue on it in any
               | environment. Look at the timeline of incoming resources,
               | not just for how long the total takes but also all the
               | other times. You will learn a lot if you haven't done
               | this yet. It will be much more informative than anything
               | we can tell you.
               | 
               | Second, once an issue is loaded, right click on almost
               | anything in the page (description, title, whatever) and
               | select "Inspect Element". Look at how many layers deep
               | you are in the HTML.
               | 
               | I also find it useful to Save the Web Page (Complete)
               | once it's all done rendering, then load it from disk with
               | the network tab loaded in the debugger. It can give a
               | quick & dirty read on how much time it takes just to
               | render the page, separate from all network and server-
               | side code issues.
               | 
               | I have a bit of a pet theory that a lot of modern
               | slowdown on the web is simply how much of the web is
               | literally dozens and dozens of DOM layers deep in
               | containers that are all technically resizeable (even
               | though it is always going to have one element in it, or
               | could be fixed in some other simple way), so the browser
               | layout engine is stressed to the limit because of all the
               | nested O(n) & O(n log n) stuff going on. (It must not be
               | a true O(n^2) because our pages would never load at all,
               | but even the well-optimized browser engines can just be
               | drowned in nodes.) I don't have enough front-end
               | experience to be sure, but both times I took a static
               | snapshot of a page for some local UI I had access to that
               | was straight-up 2+ seconds to render from disk, I was
               | able to go in and just start slicing away at the tags to
                | get a page that was virtually identical (not _quite_,
               | but close enough) that rendered in a small fraction of a
               | second, just with HTML changes.
               | 
               | My guess is that fixing the network issues will be a
               | nightmare, because the 5 Whys analysis probably lands you
               | at Conway's Law around #4 or #5. But, assuming you also
               | have a client-side rendering issue (I don't use JIRA
               | Cloud (yet) but I can vouch that the server product
               | does), you may be able to get some traction just by
               | peeling away an engineer to take a snapshot of the page
               | and see what it takes to produce a page that looks
               | (nearly) identical but renders more quickly. That will
               | not itself be a "solution" but it'll probably provide a
               | lot of insight.
               | 
               | [1]: https://news.ycombinator.com/item?id=22889975
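(Editor's aside: jerf's "layers deep" check can be roughed out in code. The sketch below counts the deepest tag nesting in an HTML snippet using only the standard library; real pages contain unbalanced markup that browsers tolerate, so take the count as a rough heuristic, not an exact DOM depth.)

```python
from html.parser import HTMLParser

# Tags that are "void" in HTML and never get a closing tag.
VOID = {"br", "hr", "img", "input", "meta", "link", "area", "base",
        "col", "embed", "source", "track", "wbr"}

class DepthMeter(HTMLParser):
    """Track the deepest tag-nesting level seen while parsing."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in VOID:
            return
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        if tag not in VOID:
            self.depth = max(0, self.depth - 1)

def max_dom_depth(html):
    meter = DepthMeter()
    meter.feed(html)
    return meter.max_depth

if __name__ == "__main__":
    # div > div > p is three levels deep.
    print(max_dom_depth("<div><div><p>hi</p></div></div>"))  # -> 3
```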
        
               | BillinghamJ wrote:
               | > In a dream world of course, everything would load in <
               | 1s
               | 
               | It's important you understand that "everything loading in
               | <1s" would still be unacceptably slow - that is still an
               | order of magnitude too slow.
               | 
               | That is not "a dream world" - not even close. A well
               | built tool like this, meeting standard expectations (i.e.
               | table stakes), would hit <50ms for the end user - the
               | vast majority of the time. A "dream world" would be more
               | like 10ms.
               | 
               | You should be targeting <200ms for 99% of user-facing
               | interactions. That is the baseline standard/minimum
               | expected.
               | 
               | This is why people are saying the company needs to make a
               | major shift on this - you're not just out of the ballpark
               | of table stakes here, you're barely in the same county!
               | 
               | It cannot be overstated how far off the mark you are
               | here. There's a fundamental missetting of expectations
               | and understanding of what is acceptable.
        
               | confluence_perf wrote:
               | Hi BillinghamJ,
               | 
               | You're right, I apologize for not being clear. We're
               | targeting 1s for "Initial loads" on new tabs/new
               | navigation, which I assume you're referring to. Our
               | target for 'transitions' is different.
               | 
                | If, however, the numbers you're referring to are
                | "initial load" numbers, then I'm not sure.
               | 
               | (edit: and action responses again are also a separate
               | category. Our largest number of complaints are about
               | 'page load times' in Confluence, so most conversations
               | center around that)
        
               | launderthis wrote:
                | No, don't give in to this guy... this is done over the
                | net. The rate of transfer has to be taken into account.
                | "Unacceptable" is a measure of comparison.
                | 
                | Unacceptable to whom? Do you have a faster provider,
                | for cheaper, with as many features?
                | 
                | I'm pretty sure he doesn't, because if he could he
                | would go there. There are tradeoffs, and Atlassian has
                | many projects they are working on. They understand that
                | there is room for improvement in performance. It's one
                | of Atlassian's priorities; it is a tech company (a
                | pretty good one, I would say).
                | 
                | I guess one question is about server redundancy. Where
                | is this guy loading from, and where is the server he is
                | loading from? Getting things below 1s is nearing the
                | speed of the connection itself. Also, at that speed
                | there are diminishing returns. Something that happens
                | at 1s vs .5s doesn't make you twice as fast when you
                | don't even have the response time to move your mouse
                | and click on the next item in .5s.
                | 
                | Sometimes techies just love to argue. You are doing
                | great, Atlassian, and have tons of features. But maybe
                | it is time to revisit and refactor some of your older
                | tools.
        
               | BillinghamJ wrote:
               | You've shown poor understanding here.
               | 
               | > Getting things below 1s is nearing the speed of the
               | connection itself
               | 
               | That is absolutely false. Internet latency is actually
               | very low - even e.g. Paris to NZ is only about 270ms RTT,
               | and you _do not_ need multiple full round trips to the
               | application server for an encrypted connection - on the
               | modern internet, connections are held open, and initial
               | TLS termination is done at local PoPs.
               | 
                | Services like this - as they are sharded by customer
                | tenancy - are usually located at least in the same
                | general area as the customer (e.g. within North
                | America, Western Europe, APAC, etc).
               | 
               | For most users of things like Atlassian products, that
               | typically results in a base networking latency of <30ms,
               | often even <10ms in good conditions.
               | 
               | Really well engineered products can even operate in
               | multiple regions at once - offering that sort of latency
               | globally.
               | 
                | > I'm pretty sure he doesn't, because if he could he
                | > would go there
               | 
               | Yeah, we don't use any Atlassian products - partly for
               | this reason. We use many Atlassian-comparable tools which
               | have the featureset we want and which are drastically
               | faster.
               | 
                | > when you don't even have the response time to move
                | > your mouse and click on the next item in .5s.
               | 
                | There is clear documented understanding of how UX is
                | affected by various levels of latency -
               | https://www.nngroup.com/articles/response-
               | times-3-important-...
               | 
               | > Sometimes techies just love to argue
               | 
               | Not really, I have no particular investment in this - I
               | don't use any Atlassian product, nor do I plan to even if
               | they make massive perf improvements.
               | 
               | But I do have an objective grasp - for tools like this -
               | of what's possible, what good looks like, and what user
               | expectations look like.
               | 
                | > No, don't give in to this guy
               | 
               | I don't expect Atlassian is going to make any major
               | decisions entirely based on my feedback here, but it is
               | useful data/input for exploration, and I do feel it's
               | right to point out that they're looking in the wrong
               | ballpark when it comes to the scale of improvement
               | needed.
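(Editor's aside: the claim that sub-1s loads are "nearing the speed of the connection" is easy to sanity-check with arithmetic, using the RTT figures mentioned in this thread. The numbers below are illustrative assumptions, not measurements.)

```python
def network_floor_ms(rtt_ms, round_trips):
    """Lower-bound the network's contribution to a page load:
    sequential round trips on an already-open connection."""
    return rtt_ms * round_trips

# Assumed figures: ~30ms RTT to a same-region PoP, a handful of
# sequential API calls (parallel calls don't add round trips).
floor = network_floor_ms(30, 3)
print(f"network floor ~{floor} ms of a 1000 ms budget")  # -> ~90 ms
```

Even at the worst-case 270ms RTT cited above, a couple of sequential round trips leave most of a one-second budget unspent, which is the point being made.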
        
               | petters wrote:
                | Initial loads should definitely be <100ms as well.
               | 
               | But Jira currently is so slow that 1s would be a great
               | improvement. I am using it at work and regret it,
               | unfortunately.
        
               | BillinghamJ wrote:
               | As a first step, 1s would be better than nothing for
               | sure, but you need to be working towards a much tighter
               | goal on a 1-2 year timeframe.
               | 
               | New load, you should really be hitting 200ms as your 95th
               | percentile - 300ms or so would be decent still.
               | "Transitions" should hit 100ms 95th, 150ms would be
               | decent.
               | 
               | If you did hit 100ms across the board, you'd be rewarded
               | by your customers/users psychologically considering the
               | interactions as being effectively instantaneous. So it
               | really is worth setting a super high bar as your target
               | here (esp given you need a bit of breathing room for
               | future changes too).
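(Editor's aside: targets like "200ms at the 95th percentile" are straightforward to check against real samples. A minimal sketch with made-up latency numbers, using the nearest-rank percentile definition:)

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical page-load samples in milliseconds (made up for illustration).
loads_ms = [120, 140, 150, 160, 180, 200, 240, 310, 450, 900]

budget_ms = 200  # suggested 95th-percentile target for new loads
p95 = percentile(loads_ms, 95)
print(f"p95 = {p95} ms ({'within' if p95 <= budget_ms else 'over'} budget)")
# -> p95 = 900 ms (over budget)
```

Note how a single slow outlier dominates the tail: the median here is well under budget, but the p95 is not, which is why percentile targets matter more than averages.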
        
               | confluence_perf wrote:
                | Thank you for coming back and clarifying. Do you
                | happen to have links to any public testing results of
                | other tools, or guidance at this level of specificity?
                | I'd love to use them to build a case internally.
                | 
                | Most of what we've seen online is nowhere near this
                | level of detail (X ms at the Y-th percentile for a
                | Z-type of load).
               | 
               | (edit: clarified request)
        
               | BillinghamJ wrote:
               | I'm afraid I'm no expert on project management tools!
               | 
               | On what users experience as effectively "instantaneous",
               | that's from experience on UX engineering and industry
               | standards - https://www.nngroup.com/articles/response-
               | times-3-important-...
               | 
               | On the other noted times, they're just a general range of
               | what can be expected from a reasonably well-built tool of
               | this nature. Obviously much simpler systems should be
               | drastically faster, but project management tools do tend
               | to be processing quite a bit of data and so do involve
               | _some_ amount of inherent "weight", but that isn't an
               | excuse for very poor perf.
               | 
               | That said, I imagine if your PMs do some research and go
               | ahead and try using some of the common project management
               | tools, you should get a good idea. ;) Keep in mind speeds
               | to Australia (assuming Atlassian is operated mostly
               | there?) will likely show them in a much worse light than
               | typical perf experienced in the US/UK/EU areas.
               | 
               | The time to first load is derived from the fact that
               | you're running essentially the equivalent of many
               | "transition" type interactions, but they should be run
               | almost entirely in parallel, so roughly 2x between
               | "transition" and "new load" is a reasonable allowance.
        
               | NicoJuicy wrote:
               | > A "dream world" would be more like 10ms.
               | 
                | A gateway alone adds 50 ms. So I'm not really sure
                | where you get your numbers/benchmarks from... they are
                | unrealistic.
        
               | BillinghamJ wrote:
               | What gateways have you been using?! That's a long, long
               | way off on the modern internet. Assuming you mean
               | gateways as in the lines you'd see on a traceroute, more
               | typical might be ~2-5ms on a home router, ~0.5-1.0ms
               | upstream.
        
               | NicoJuicy wrote:
               | Lol, wasn't expecting that :p
               | 
                | Ocelot would be a better example of a gateway:
               | https://github.com/ThreeMammals/Ocelot
               | 
                | Used for scaling up web traffic or creating BFFs
                | (backends for frontends).
        
             | lmilcin wrote:
             | See, the irony of this is that you are just publicly
             | sharing performance numbers which undeniably show a pattern
             | of performance issues. It also doesn't seem to be possible
                | without you first accepting the ToS.
             | 
             | Ooops!
        
               | launderthis wrote:
                | Who says this is an "issue"? It's just numbers. If you
                | think it's an issue, that's your interpretation. For
                | instance, I used Jira to communicate with my team about
                | 3 projects and it only took me 3 hours.
                | 
                | Maybe this person is writing a fiction story where the
                | protagonist is using Jira, and they are detailing how
                | they spend their day.
                | 
                | It's like a John Steinbeck novel.
        
               | lmilcin wrote:
               | May I point you to the title of this submission?
               | 
               | "Atlassian Cloud ToS section 3.3(I) prohibits discussing
               | performance issues"
               | 
               | No sane judge would agree with your interpretation.
        
               | [deleted]
        
           | igetspam wrote:
            | Know what would be great? Markdown support. The WYSIWYG
            | is full of bad assumptions and has been forever. In the
           | beginning, we could at least opt out but that's long gone. I
           | actively encourage companies I consult for to use anything
           | but confluence because it seems to be designed specifically
           | for the lowest common denominator with no allowance for
           | people who work faster with a keyboard.
        
             | rhencke wrote:
             | How does this relate to the performance issues being
             | discussed?
        
               | freedomben wrote:
               | Not GP, but I'd say because it's heavy, and if you didn't
               | _have_ to use it (as there are alternative methods) then
                | you don't pay the performance penalty.
        
               | runlevel1 wrote:
               | In the case of Confluence specifically, it makes the cost
               | of experimentation a lot steeper.
               | 
               | Confluence's WYSIWYG editor will often make changes that
               | can't be reversed with "undo" -- especially those
               | involving indentation. Copy-paste frequently screws up
               | its formatting as well.
               | 
               | So if you don't want to risk losing lots of work, you
               | have to make many smaller changes. With each change
               | taking a few seconds, it adds up quickly.
               | 
               | If it was markdown or some extended set of it, that
               | wouldn't be a problem.
        
               | confluence_perf wrote:
               | We're trying to collect and fix such occurrences, so if
               | you have something with specific repro steps please send
               | them to me and I'll make sure they get to the right team.
        
               | runlevel1 wrote:
               | The WYSIWYG editor makes it extremely difficult to give
               | repro steps because formatting information is hidden from
               | the user and is not perfectly preserved during copy-
               | paste.
               | 
               | More generally, the issues I see reported only ever seem
               | to be fixed in the Cloud version. I currently have to use
               | the Data Center version.
               | 
               | Why go through the trouble of reporting an issue I'll
               | never see fixed involving a feature that I loathe using?
        
               | throw_jirauser wrote:
               | > More generally, the issues I see reported only ever
               | seem to be fixed in the Cloud version. I currently have
               | to use the Data Center version.
               | 
               | > Why go through the trouble of reporting an issue I'll
               | never see fixed involving a feature that I loathe using?
               | 
               | I find the same issue with Nessus Professional. They too
               | are trying to funnel everyone into using tenable.io (SaaS
               | nessus scanner). And they also are very resistant in
               | doing much of any changes (other than removing the API)
               | from their onprem solution. But tenable.io keeps getting
               | regular updates.
               | 
               | Worse yet, when you talk to anybody there, the first
               | thing is "Why arent you using tenable.io ?" My response
               | every time is, "Has it been fedramped YET?".. Of course
               | it hasn't. Not entirely sure they even plan on doing
               | that.
               | 
               | For maintaining data integrity and preserving a secure
               | environment, these companies are demanding that we open
               | our networks and store our critical data somewhere we
               | really don't have access to.
        
               | MandieD wrote:
               | I always write my stuff in Word first, then paste into
               | Confluence.
        
               | runlevel1 wrote:
               | I do that too when creating a new page.
               | 
               | Editing an existing page is more perilous. I sometimes
               | write out my changes in plaintext first, and then paste
               | and format it into the doc piece by piece. Even that can
               | get janky, though.
               | 
               | It's like Confluence is punishing my attempts to write
               | documentation.
        
               | ratww wrote:
               | Don't know if that's what OP was thinking, but a WYSIWYG
               | editor is normally slower than a plaintext editor, so it
               | could make the product more bearable.
        
               | etimberg wrote:
               | Half the UI uses textile and half the UI uses Markdown.
               | Shipping two parsers & renderers for the app makes it a
               | lot bigger.
        
             | confluence_perf wrote:
             | followup questions: what level of support are you looking
             | for, and what would be sufficient?
             | 
             | 1) markdown macro (limits markdown to the body content, and
             | not interacting with other macros or styling)
             | 
             | 2) copy/paste markdown -> autoconvert to WYSIWYG (limits
             | markdown to copy/pasting, so no editing Markdown inside)
             | 
              | 3) "markdown pages" (something like 'this page is Markdown
              | only, no WYSIWYG')
             | 
              | I make no promises about any of these becoming real, but
              | I'm looking for what's most valuable.
        
               | chrisseaton wrote:
               | Just normal Markdown for all edit fields! GitHub manages
               | to have a single (almost) Markdown format everywhere.
                | Could we have that in all Atlassian products? There's no
               | need for your own format.
        
               | epage wrote:
                | Not who you are responding to, but some thoughts:
               | 
               | 1. Don't be Slack / Teams :)
               | 
               | What I mean by this is if you support it, support it
               | correctly rather than "markdown as keyboard shortcuts"
               | that some products do where if you make the simplest of
               | editing changes, your "markdown" isn't recognized.
               | 
               | 2. Devs want to consistently use Markdown across their
               | entire suite
               | 
               | At my last job, we were switching to Azure DevOps. They
               | support Markdown in the developer workflow (PRs) but
               | rich-text-only for Tasks (to be non-dev friendly?).
               | 
               | At my current job, I'm involved in developer productivity
               | and have been interviewing people to collect input for
               | where we want to take development. We currently use
               | Phabricator which uses Remarkup. This is a source of
                | frustration because it's not quite the markdown they use
               | everywhere else.
               | 
               | From these, I'm thinking Markdown Pages would be the top
               | choice since it allows developer-only interactions and
               | marketing-only interactions to stay within what they are
                | comfortable with.
        
               | confluence_perf wrote:
               | Thanks for the detail! Point (1) is very interesting
               | 
                | On (2) -> it sounded like you're saying both (a) 'across
                | the entire suite identically is good' and (b) 'if
                | there's a clear separation of dev vs non-dev apps/docs,
                | having different support is okay'. Did I read that right?
                | Please correct me if wrong.
        
               | saagarjha wrote:
               | As a programmer, being able to have something like what
               | GitHub does (Markdown text, you can preview what the
               | rendered version looks like) would be great. I assume
               | that people who are less technical would like a more
               | WYSIWYG editor but as long as the ability to write
               | straight Markdown is there I am sure that I wouldn't
               | care. Oh, and make sure it's actually a reasonable subset
               | of Markdown, not like Discord or Slack where you support
                | bold and italics but not much else.
        
               | confluence_perf wrote:
               | Thanks! I'm not sure this is the most doable thing (I'm
               | not familiar with the technical aspects of the editor
               | storage format) but can definitely discuss with that
               | team.
               | 
               | The "reasonable subset of Markdown" is also a very useful
               | specific detail, exactly the kind of specificity that
               | helps us do our jobs.
        
               | Kinrany wrote:
               | There's CommonMark, a standard for Markdown. You could
               | make that your goal, instead of feature parity with any
               | of the proprietary implementations like Github.
        
             | freedomben wrote:
             | Yes, agreed. I get that confluence is supposed to be for
             | everybody, not just programmers, but there are plenty of
             | great WYSIWYG implementations that use markdown under the
             | hood (and make that accessible). If I could clone the
             | underlying git repo (or an abstraction or something) like
             | Gollum allows, I'd advocate for confluence everywhere. As
             | it stands now I advocate for either markdown/asciidoc in
             | the code, or gitlab wiki (which allows doing that).
        
           | dmitriid wrote:
           | I think this answer from two months ago should give all the
           | insight you ever want:
           | https://news.ycombinator.com/item?id=24818907
           | 
           | And the first answer to your comment in this thread profiling
           | performance for an empty page with almost no data on a small
           | project should give you even more data than you ever would
           | want.
           | 
           | And this one for an _empty project_ :
           | https://news.ycombinator.com/item?id=25616069
           | 
           | However, having personally experienced "upgrades to Jira and
           | Confluence experiences" over the past few years, I can safely
           | say: no one at Atlassian gives two craps about this. All the
           | talk about "We are definitely working on generalized efforts
           | to make 'most/all/everything' faster" is just that: talk.
           | There's exactly zero priority given to performance issues in
           | lieu of flashy visual upgrades which only make the actual
           | experience worse.
           | 
           | > We're trying to focus on frustrating pages/experiences
           | rather than number of network calls and such, because while
           | the latter is correlated, the former (frustrating slow
           | experiences) is really the bar and the higher priority.
           | 
           | Exactly: you aren't even trying to understand what people are
           | telling you. These metrics you ask for and then dismiss
           | entirely _are the primary, core, systemic reason_ for
            | frustrating slow experiences that you pretend are "high
           | priority". No, frustrating slow experiences have not been a
           | high priority for years (if ever).
           | 
           | If you need to do 200 requests and load 27.5 MB of data to
           | display an empty page, _therein lies your problem_. You, and
           | other PMs at Atlassian fail to understand these basic things,
           | and yet we get platitudes like  "performance is our top
           | priority". It is _not_. You 're good at hiding information
           | and buttons behind multiple layers of clicks, each of which
           | needs another 200 requests and 5-15 seconds to execute.
           | 
           | Oh. You're also good at adding useless crap like this:
           | https://grumpy.website/post/0TcOcOFgL while making sure that
           | your software is nigh unusable:
           | https://twitter.com/dmitriid/status/888415958821416960 I
           | imagine all performance tickets get dismissed because no one
            | can see the description even on a 5K monitor.
        
           | X-Istence wrote:
           | Other commenters have hit on this already, but the worst one
           | that bites me all of the time is this one:
           | 
            | 1. I click a link to an issue
            | 
            | 2. I need to do something on that issue, so I attempt to
            | click on a particular section to go make a modification
            | 
            | 3. Bam, some background script has loaded, some new piece
            | of content was shoved in, and what I clicked wasn't the
            | thing I was expecting to click
           | 
           | Also, certain interactions within JIRA take far too many
           | steps, and each one takes far too long to load, so it makes
           | me dislike JIRA even more.
           | 
           | Project managers love JIRA, but engineers don't, because each
           | time you make us wait we are less inclined to deal with the
            | software that PMs need us to use so they know how things are
           | going, so instead we get more meetings. If JIRA were fast, we
           | could cut down on meetings.
           | 
           | Please make JIRA fast.
        
           | xtracto wrote:
           | > Would you be willing to share more specifics, such as: -
           | Pages with content X are the slowest - Trying to do A/B/C is
           | annoyingly slow - etc ?
           | 
           | I would... but Atlassian TOS prohibit me from doing so :(
        
           | Macha wrote:
           | It's hard to name a single action I can take in JIRA which
           | does not feel unacceptably slow. However, these are the
           | actions that cause the most issues for me due to being used
           | most often (JIRA datacenter, MBP 2019 with i7 + 32gb ram):
           | 
           | 1. Viewing a board. This can take 10+ seconds to load.
           | 
           | 2. Dragging an issue from one column to another. This greys
           | out the board, rendering it unreadable and unusable for 5-ish
           | seconds.
           | 
           | 3. Editing a field. I get a little spinner before it applies
           | for 2-3s for even the simplest edits like adding a label.
           | 
           | 4. Interacting with any board that crosses multiple projects.
           | A single project board is bad enough, as in point 1, but we
           | have a 5 project board that takes 20+ seconds.
           | 
           | Actually, I found an action that's pretty ok: Search results
           | are fast, even if clicking into any of them is not. I'm not
           | sure why rendering a board is so different performance wise.
        
             | confluence_perf wrote:
             | Thank you so much for the details! This is very helpful. I
             | will pass this along to my Jira Perf colleagues (there's
             | multiple of them, since they know Perf is such a big
             | issue).
             | 
             | Just to clarify on Search though, which search are you
             | talking about:
             | 
              | a) quick search (top bar)
              | 
              | b) issue search (the one with the basic/JQL switcher)
              | 
              | c) something else
             | 
             | Trying to narrow down the latter "even if clicking into any
              | of them is not" part to understand which view that is.
        
               | Macha wrote:
               | Doesn't Quick Search lead into issue search when you
               | press enter?
               | 
               | I think I mean issue search.
               | 
               | By clicking into them, I mean actually loading the issues
               | is slow.
        
           | TomVDB wrote:
           | If you honestly need details and specifics when Confluence
           | has always been a slow mess to the point of being unusable,
           | then maybe Atlassian needs a PM in charge of performance
           | metrics first and foremost?
        
           | miken123 wrote:
           | > Would you be willing to share more specifics
           | 
           | That's something you could easily figure out yourself. E.g.,
           | just grabbing some random JIRA:
           | 
           | https://hibernate.atlassian.net/jira/software/c/projects/HV/.
           | ..
           | 
           | Opening an issue in that tracker takes 24 seconds for me.
           | Twenty-four.
        
             | throw_jirauser wrote:
              | 31.2 seconds, on a not-great internet connection (cell).
        
           | [deleted]
        
           | saagarjha wrote:
           | If you're being rate limited, try emailing the moderators at
           | hn@ycombinator.com and they might be able to help you.
        
           | ratww wrote:
           | _> Trying to do A /B/C is annoyingly slow - etc ? [...] We're
           | trying to focus on frustrating pages/experiences rather than
           | number of network calls_
           | 
           | It's not really a problem with a certain page or a certain
           | action: it's a systemic issue, that can only be solved with a
           | systemic change.
           | 
           | This has come up before here in HN [1]. From my point of
           | view, ignoring the issue around number of calls/performance
           | and all feedback regarding it is the root cause for the
           | slowness.
           | 
           | [1] https://news.ycombinator.com/item?id=24818907
        
             | confluence_perf wrote:
             | Hi ratww,
             | 
             | Thank you for reiterating this point, and I'll try to shed
             | some light on this. We actually are working on systemic
             | changes to try to make this lighter/better but I can't talk
             | about specifics until the feature is available.
             | 
              | On the other hand, any level of specificity is great, for
              | example:
              | 
              | 1) full page loads are slower and more annoying than
              | transitions (or vice versa)
              | 
              | 2) loading the Home page is slower and more annoying than
              | Search Results (or vice versa)
              | 
              | 3) waiting for the editor to load is more annoying than
              | X/Y/Z
              | 
              | 4) etc.
             | 
             | Even systemic changes require individual work for applying
             | to these different views, so any level of specific feedback
             | would be helpful.
             | 
             | (also it looks like HN is limiting my reply rate so
             | apologies for any slowness)
        
               | [deleted]
        
               | skrap wrote:
               | If you want to make performance a feature, you need to
               | (in order!)
               | 
               | * define a metric
               | 
               | * measure it automatically with every commit
               | 
               | * define a success threshold
               | 
               | * make changes to get yourself under the threshold
               | 
               | * prohibit further changes which bring you above the
               | threshold
               | 
               | Just do it like that for pretty much every view in the
               | system.
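The gating step in the list above could be sketched roughly like this (illustrative Python; the percentile, the 2-second threshold, and the assumption that an automated benchmark harness emits per-run timings in milliseconds are all mine, not anything Atlassian uses):

```python
def p95_gate(samples_ms, threshold_ms=2000):
    """Pass only when the p95 page-load time is under the threshold.

    samples_ms: per-run page-load timings (milliseconds) from an
    automated benchmark; threshold_ms: the agreed success threshold.
    """
    ordered = sorted(samples_ms)
    # Index of the 95th-percentile sample (nearest-rank method).
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx] < threshold_ms
```

Run against every commit, one check like this both defines the metric and enforces the "no changes above the threshold" rule in a single place.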
        
               | confluence_perf wrote:
                | Taking another poster's suggestion to draw on the
                | technical community here, I have one question and one
                | comment, if you can provide more insight:
               | 
               | question a) My understanding is that performance numbers
               | fluctuate a LOT, even at sampling in the tens of
               | thousands. Do you have any recommendations of tools or
               | methods to reduce this variance?
               | 
               | comment b) we're definitely trying to do this but we're
               | not there yet - most of our metrics don't meet goals we
               | set. Instead the blocking goals must be 'don't make it
               | any worse', which is doable -> but it doesn't necessarily
               | make anything better yet (thus all the questions about
               | what is most annoying that we can fix first).
               | 
               | Hopefully point (b) is clear - I'm not saying "our
               | performance is great/good/acceptable", just the best I
               | can do (as a PM) is try to figure out what to prioritize
               | to fix.
        
               | lostdog wrote:
               | The high variance is another problem. Good software has
               | low variance in performance. Especially if you're
               | sampling in the tens of thousands.
               | 
               | The high variance does give you two tactical problems.
               | First, how do you keep performance from getting worse?
               | Typically you would set a threshold on the metrics, and
               | prevent checking in code that breaks the threshold. With
               | high variance you clearly cannot do this. Instead, make
               | the barrier soft. If the performance tests break the
               | threshold, then you need to get signoff from a manager or
               | senior engineer. This way, you can continue to make
               | coding progress while adding just enough friction that
               | people are careful about making performance worse.
               | 
               | The second problem of high variance is showing that
               | you're making progress. However, for you, this isn't a
               | real problem. You're not talking about cutting 500
               | microseconds off a 16 millisecond frame render. You need
                | to cut 5-25 _second_ page loads down by a factor of 10
                | at least. There must be dozens of dead obvious problems
                | taking up seconds of run time. Is Confluence's
               | performance so atrocious that you couldn't statistically
               | measure cutting the page load time in half?
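The soft barrier described above might look something like this (a sketch, assuming each PR carries timing samples for both the base branch and the candidate build; the two-standard-error cutoff is an arbitrary illustrative choice):

```python
import statistics

def needs_signoff(base_ms, pr_ms, sigmas=2.0):
    """Flag a PR for manager/senior-engineer sign-off when its mean
    timing is more than `sigmas` standard errors above the base
    branch's mean, i.e. a slowdown larger than the measurement noise."""
    diff = statistics.mean(pr_ms) - statistics.mean(base_ms)
    # Standard error of the difference between the two sample means.
    se = (statistics.variance(base_ms) / len(base_ms)
          + statistics.variance(pr_ms) / len(pr_ms)) ** 0.5
    return diff > sigmas * se
```

Anything under the cutoff merges freely; anything over merely triggers a conversation, so legitimate but costly changes aren't blocked outright.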
        
               | confluence_perf wrote:
               | "High variance as a consequence of poor software" is an
               | interesting point and not one I'd considered -> I will
               | take this to engineering and see if we can do anything
               | about that (some components maybe, but we see high
               | network variances too which seem unlikely to be fixable).
               | 
               | Showing that we're making progress isn't as much of a
               | problem - similar to what you stated, the fixes
               | themselves target large enough value that it's measurable
               | at volume for sure, and even in testing.
               | 
               | The main issue is "degradations" -> catching any check-
                | ins that can degrade performance. These are usually small
                | individually (let's say, low double-digit ms, within the
                | variance noise), but add up over time, and by the time
                | the degradation is really measurable, it's complicated to
                | track down the root cause. Hopefully I described that
               | in a way that makes sense?
               | 
               | Any suggestions welcome.
               | 
                | (Edit: downvoted too much and replies are throttled
                | again)
                | 
                | @lostdog Thanks for the detail! Will definitely take
                | this to the eng team for process discussion.
        
               | lostdog wrote:
               | I work in an area where high variance is very expected
               | and unavoidable. Here's what we do:
               | 
               | In your PR, you link to the tool showing the performance
               | diff of your PR. The tool shows the absolute and relative
               | differences of performance from the base version of code.
               | It also tracks the variance of each metric over time, so
               | it can kind of guess which metrics have degraded, though
               | this doesn't work consistently. The tool tries to
               | highlight the likely degraded metrics so the engineer can
               | better understand what went wrong.
               | 
               | If the metrics are better, great! Merge it! If they are
               | worse, the key is to discuss them (quickly in Slack), and
               | decide if they are just from the variance, a necessary
               | performance degradation, or a problem in the code.
               | Typically it's straightforward: the decreased metrics
               | either are unrelated to the change or they are worth
               | looking into.
               | 
               | The key here is not to make the system too rigid. Good
               | code changes cannot be slowed down. Performance issues
               | need to be caught. The approvers need to be fast, and to
               | mostly trust the engineers to care enough to notice and
               | fix the issues themselves.
               | 
               | We also check the performance diffs weekly to catch
               | hidden regressions.
               | 
               | IF YOUR ORGANIZATION DOES NOT VALUE AND REWARD
               | PERFORMANCE IMPROVEMENTS, NONE OF THIS WILL WORK. Your
               | engineers will see the real incentive system, and resist
               | performance improvements. Personally, I don't believe
               | that Atlassian cares at all about performance, otherwise
               | it never would have gotten this bad. Engineers _love_
                | making things faster, and if they've stopped optimizing
               | performance it's usually because the company discourages
               | it.
        
               | AnHonestComment wrote:
               | Your post comes across as patronizing and ignoring the
               | feedback to your company.
               | 
               | I think less of your company for these posts.
        
               | lostdog wrote:
               | Dear Esteemed Colleague at Atlassian,
               | 
               | I also use Confluence and JIRA regularly, and can confirm
               | that they are the slowest most terrible software that I
               | use on a regular basis. Every single page load and
               | transition is slow and terrible.
               | 
               | Asking "which one is the highest priority" is like asking
               | which body part I'd least prefer you amputate. The answer
               | is: please don't amputate any of them.
               | 
               | It's as if I asked you to dig out a hole for pouring
               | foundation for a house. The answer to "which shovelfull
               | of dirt has the highest priority" is all of them. Just
               | start shoveling. It's not done until you've dug the
               | entire hole.
               | 
               | It's like the exterminator asking which specific
               | cockroach is bothering me the most. (It's Andy. Andy the
               | cockroach is the most annoying one, so please deal with
               | her first).
               | 
               | What I, and many many other commenters, are trying to
               | tell you is that the entire product is slow and terrible
               | (not your fault. I'm guessing you're new and just trying
               | to improve things, and I hope you succeed!). If it were a
               | building, I'd call it a teardown. If it were a car, I'd
               | call it totaled.
               | 
               | It doesn't matter what page or interaction you start
               | with. Just start shoveling.
        
               | confluence_perf wrote:
               | Hi lostdog,
               | 
               | Thanks for the understanding! Indeed I haven't been at
               | Atlassian that long, but that's not a good excuse: it's
               | my problem to own.
               | 
               | I appreciate the reinforcement of "fix everything", and I
               | assure you we're trying our best to do so. As a PM it is
               | my natural instinct (and literal job) to prioritize, so
               | I'm always looking for more details to do so.
               | 
               | I can understand that my request for details can imply
               | that I'm either not listening or not believing the
               | feedback, but that is not the case -> I do understand
               | everything is slow and needs fixing.
        
               | throw_jirauser wrote:
               | This is a throwaway since I use Jira/Confluence at work
               | and am not authorized to officially speak on their
               | behalf.
               | 
               | We are actively looking for other solutions outside of
                | Atlassian, specifically because of the demands to switch
                | to your cloud offerings. We simply do not trust your
                | cloud.
               | your cloud offerings. We simply do not trust your cloud.
               | 
               | We also have a higher compliance requirement, since we
               | can have potential snippets of production data. Our
               | Jira/Confluence systems are highly isolated inside a high
               | compliance solution. We can verify and prove that these
               | machines do not leak.
               | 
               | The Atlassian cloud is completely unacceptable in every
                | way possible. And going from $1,200-ish a year to
                | $20,000 per year with Data Center is laughably
                | horrendous - for the exact same features.
               | 
                | Unless Atlassian changes its direction, your software is
                | that of the walking dead. We have an absolute hard time
                | limit of 2024, but in reality, 2022. We'd like to still
                | use it and pay you appropriately, but we're not about to
                | compromise our data security handling procedures so you
                | can funnel more people into a cloud service which,
                | judging by the comments here, is pretty damn terrible.
        
           | pacamara619 wrote:
           | Is this bait?
        
           | tbodt wrote:
           | I would say the right place for the performance team to apply
           | resources is looking for bugs or missed optimizations that
           | affect everything or nearly everything on the site.
           | Everything is uniformly slow, so there must be a lot of this.
        
           | jiggawatts wrote:
           | The topic of poor Jira performance came up yesterday, and I
           | did some quick benchmarking of Jira cloud using the best-case
           | scenario for performance: A tiny amount of data, no complex
           | permissions, a commonly used form, no web proxy, no plugins,
           | same geo region as the servers (Sydney), gigabit fibre
           | internet(!), etc...
           | 
           | I spun up a free-tier account and created an empty issue. No
           | data. No history. Nothing in any form fields. As blank as
           | possible.
           | 
           | The _only_ positive aspect is that most of the traffic is
            | coming from a CDN that enables gzip, IPv6, HTTP/2, AES-GCM,
           | and TLS 1.3. That's the basics taken care of.
           | 
           | Despite this, reloading the page with a warm cache took a
           | whopping 5.5 seconds. There's an animated progress bar for
           | the empty form!
           | 
           | This required 1.2 MB of uncacheable content to be
           | transferred.
           | 
            | With the cache disabled (or cold), a total of 27.5 MB across
            | 151 files, taking 33 seconds, is required to display the
            | page. That's over 5 MB of network traffic after compression.
            | (Note that some corporate web proxies strip compression, so
            | you can't rely on it working!)
           | 
           | For reference, it takes 1.6 seconds on the same computer to
           | start Excel, and 8 seconds to load Visual Studio 2019
           | (including opening a project). That's _four times faster_
           | than opening an issue ticket with a cold cache!
           | 
           | Meanwhile, the total text displayed on the screen is less
            | than 1 KB: the page has a transfer-to-content ratio
            | exceeding 1000-to-1. This isn't the animated
           | menu of a computer game, it's a _web form!_
           | 
            | To render the page, a total of 4.35 seconds of CPU time was
            | required on a gaming desktop PC with a 3.80 GHz CPU.
            | Having 6 cores doesn't seem to help performance, so don't
            | assume upcoming higher-core CPUs will help in any way.
           | 
           | A developer on an ultraportable laptop running on battery
           | over a WiFi link with a bad corporate proxy server in a
           | different geo-region would likely get a much worse
           | experience. Typically they might get as little as 1.5 GHz and
           | 20 Mbps effective bandwidth, so I can see why people are
           | complaining that Jira page loads are taking 10+ seconds!
           | 
           | In perfectly normal circumstances your customers are likely
           | seeing load times approaching a solid minute.
           | 
           | PS: I do development, and I've avoided Atlassian products
           | primarily because there's been a consistent theme to all
           | discussions related to Atlassian, especially Jira: It's slow.
           | 
           | Stop asking your customers if they're running plugins, or
           | what configuration they're using. Start asking yourself what
           | you've done wrong, terribly, _terribly_ wrong.
        
             | alexott wrote:
             | And if you are on battery only, with Wi-Fi over tethering,
             | try to find relevant issues to solve problems of your
             | customer... Or trying to file new Jira on the same setup...
             | It's so painful
        
           | plorkyeran wrote:
           | Literally everything? I don't think I could give an example
           | of something which _isn't_ frustratingly slow in Jira. It
           | doesn't need targeted fixes to specific things; if I
           | successfully made a list of the ten biggest offenders and
           | they were all magically fixed tomorrow I don't think it'd
           | appreciably change the experience of using Jira because the
           | next 90 would still be awful.
           | 
           | When faced with long-tail performance problems, it's often
           | better to target the things which are easier to fix rather
           | than the highest impact fixes. Making 20 relatively low-
           | impact things faster can easily be better than improving 10
           | individually high impact things.
        
             | pluc wrote:
             | It seriously makes you wonder whether they even use it
             | internally, because not acknowledging or fixing those
             | issues while pretending you have a fast system doesn't make
             | sense.
        
               | marmaduke wrote:
               | Yep, and from a demo of YouTrack (from JetBrains), I got
               | the opposite impression: it's streamlined just the way a
               | developer would want, keyboard shortcuts and all.
        
               | dmitriid wrote:
                | YouTrack still sometimes comes up with some very weird
                | shortcuts :)
               | https://youtrack.jetbrains.com/issue/JT-19706
        
           | keithnz wrote:
            | This "trying to show concern" is just fake. Atlassian have a
            | ticket tracking system for their problems. They ignore so
            | many of the big, hard problems; they close tickets with
            | hundreds and hundreds of people on them explaining a
            | multitude of core problems. Coming on HN to spin it for PR
            | purposes is just not going to work: thread after thread on
            | HN shows that many, many people have been BURNT by
            | Atlassian products. However, I will say Confluence has
            | improved, but so many things still suck about it, including
            | it being sluggish, and a search that seems really brain-dead.
        
           | core-questions wrote:
           | > Would you be willing to share more specifics,
           | 
           | You've got to be kidding me. Have you used your own product?
           | Going from my carefully tuned Server install to cloud Jira or
           | Confluence is a night-and-day difference. The Cloud product
           | is virtually unusable in comparison for any heavy Jira user.
           | 
           | You don't need "specifics", you need your performance
           | engineering team to literally take over the entire company
           | for 6 months to a year. No new features - nobody fucking
           | needs them, the features 99% of your users use have been in
           | the product for 5+ years already. Whatever you're PM'ing,
           | cancel it, it's a waste of time in comparison to making the
           | product not suck. The biggest source of losing users to some
           | other product is going to be the sheer pain of continuing to
           | use Atlassian....
           | 
           | Just make it usable. Halve the number of requests. Cache more
           | things client-side. Do more server-side pre-processing so
           | that a round-trip is not needed when I click on a menu.
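            | 
            | The "cache more things client-side" suggestion mostly comes
            | down to asset headers. A hypothetical, framework-agnostic
            | sketch of such a policy (the paths and lifetimes are
            | illustrative, not Atlassian's actual layout):

```python
# Generic caching-policy sketch: fingerprinted static assets can be
# cached by the browser essentially forever, while dynamic pages and
# API responses must always be revalidated.
def cache_headers(path: str) -> dict:
    if path.startswith("/static/") and "." in path:
        # Fingerprinted assets (e.g. app.3f2a.js) never change in place.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    # Dynamic HTML/API responses: revalidate on every use.
    return {"Cache-Control": "no-cache"}
```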
           | 
           | I'm not looking forward to when I am forced to migrate my
           | users to a more expensive and less performant experience. I
           | and hundreds of thousands of other administrators will be
           | experiencing months of user complaints because of the forced
           | migration as it is; this is Atlassian's real chance to make
           | it suck less in time.
        
           | runlevel1 wrote:
           | Navigation is sluggish across the board in both Confluence
           | and Jira.
           | 
           | Not just the Cloud service, the self-hosted versions are also
           | painfully slow no matter what resources you throw at them.
           | 
           | That makes the other UX issues worse because the feedback
           | loop has so much lag.
        
             | ratww wrote:
             | _> Navigation is sluggish across the board in both
             | Confluence and Jira._
             | 
             | Also in Bitbucket. It used to be super fast, but recent
             | changes made it very slow.
             | 
             | My team loves the integration with JIRA but we're
             | considering going to Github because of the slowness.
        
             | lukeschlather wrote:
              | I know Jira has always had scaling issues. I used to work
              | for a very large company in the early 2010s that had, I
              | think, 5 separate Jira instances, each dealing with on the
              | order of ten thousand daily active users.
        
             | coredog64 wrote:
             | Good news about the self-hosted version being slow: Pretty
             | soon you won't be able to run self-hosted Atlassian
             | products.
        
               | runlevel1 wrote:
               | Their Data Center version will still be available.[1]
               | That's what we're currently on.
               | 
               | Data Center includes a few performance-related features
               | like being able to run multiple frontends. I think we're
               | running 4 instances right now. It's still really slow,
               | even when nobody else is using it.
               | 
               | [1]: https://www.atlassian.com/blog/jira-software/server-
               | vs-data-...
        
         | sharken wrote:
         | I think future customers would like to know what kind of
         | performance to expect, so a write-up sounds like a great idea.
         | 
         | It sounds like a bad business decision to not support open
         | discussion on performance, it makes me think that they have
         | something to hide.
        
         | dreamcompiler wrote:
         | I'm very curious about where the slowdown is coming from. Is it
         | mostly JS on the client or Java on the server? When I ran my
         | own Confluence server on a Digital Ocean VM, it was slow but
         | not unbearable. I assumed it was Tomcat's fault* or the fact
         | that I wasn't using a "real" database on the backend (a
         | configuration Atlassian frowns upon).
         | 
         | *Confluence is built on Tomcat. Don't know if this is also true
         | for Jira.
         | 
         | Now that my Confluence server is on Atlassian's cloud, it seems
         | much slower still. So I have to assume it's not client-side JS
         | because that hasn't changed much; there's some kind of resource
         | starvation going on with Atlassian's servers.
        
       | macinjosh wrote:
       | JIRA IS SLOW.
       | 
       | sue me.
        
       | acdha wrote:
       | This is not surprising: anyone who's used Atlassian products
       | knows that quality has been job number 97 for years. That doesn't
       | happen by accident - someone's made the decision that they'll
       | make sales anyway and cut the QA budget.
       | 
       | One of the most obvious examples: they have multiple WYSIWYG
       | editor implementations which aren't compatible. When you format
       | something in Jira it'll look fine in the preview and then render
       | differently on reload. It's been like that for years, nobody
       | cares.
        
         | meibo wrote:
         | This is even worse than the crazy slowness.
         | 
          | I spend 10 minutes making a detailed bug report, only to have
          | it fall apart after submitting. How does that happen in
          | software made to display bug reports?
         | 
         | Just use standard markdown instead of your own bullcrap
         | formatting that doesn't ever seem to work.
        
         | redisman wrote:
          | That doesn't sound like a QA issue. Rather, they have too many
          | competing departments reimplementing the same things.
        
           | acdha wrote:
           | That's the cause. Normally that'd be caught in testing -- the
           | same input producing different outputs isn't hard to test --
           | and it's commonly reported by users. Maybe they each think
           | the other team should fix it but who cares: from the user's
           | perspective it's broken.
        
       | sidlls wrote:
       | I have yet to use a JIRA or Confluence system that isn't almost
       | insufferably slow while also suffering from a terrible UX. They
       | seem to be the IBM of project management and documentation in the
       | software world, though.
       | 
        | Also, for users not acting in the capacity of a representative
        | of the employer that purchases or installs JIRA/Confluence, it
        | is perfectly fine to discuss performance issues such as the
        | above. The law isn't as dumb as some seem to think.
        
       | floatingatoll wrote:
       | I'm enjoying the irony of willful TOS breaking here, but y'all
       | probably shouldn't be openly declaring yourselves as subject to
       | enforcement actions under accounts with names attached to them,
       | even if Atlassian's own guidance (see elsethread) suggests they
       | don't care about your comments.
        
       | cosmotic wrote:
       | Most if not all users of these applications aren't even in the
       | position to review or accept the terms. When I was subjected to
       | the terms dialog, I asked our legal department what I should do
       | since obviously I am not an agent of the company. They said 'just
       | click accept'. Unbelievable.
        
       | chrisandchris wrote:
        | How bad does your performance have to be before you explicitly
        | prohibit people using your service from talking about it?
        | 
        | I'll wait until they discontinue the service for a customer
        | because they did not respect this ToS section.
        
       | lexicality wrote:
       | I'm more concerned about 3.3(c) "[you will not] use the Cloud
       | Products for the benefit of any third party".
       | 
       | Surely if I'm tracking bugs in Jira, that is to the benefit of my
       | users?
       | 
        | What if I am using post-it notes to keep track of a client's
        | requests and they demand I use Jira instead because I keep
        | losing the notes? It's very much to their benefit that I use
        | the cloud services...
        
       | confluence_perf wrote:
       | This is certainly a strange thing to have in there, as we've had
       | public discussions about performance before and I assume no one's
       | accused those customers of violating the ToS.
       | 
       | I'll see if I can find someone inside Atlassian to talk about
        | this part of the ToS.
       | 
       | (edit: Looks like other users have found similar clauses in other
       | companies, so it seems like it might be standard legalese. Will
       | still see if I can find out more)
        
         | [deleted]
        
       | snvzz wrote:
       | We've established Jira/Confluence/etc. are crap.
       | 
       | Now, let's talk alternatives.
        
         | rexelhoff wrote:
         | Why is this not higher?
         | 
         | Every atlassian thread should include a sticky "Go here for an
         | alternative" post
        
       | paulryanrogers wrote:
       | The most innocent explanation is preventing benchmarks against
       | similar tools which could be unflattering. Though the only other
       | company that resorts to such insecure measures is Oracle.
        
         | cortesoft wrote:
         | The comment right next to yours is a link to cloudflare's TOS
         | with the exact same provision. It is not just Atlassian and
         | Oracle.
        
         | detaro wrote:
         | way more companies do that than just those two.
        
       | [deleted]
        
       | dekhn wrote:
       | The obvious way to deal with this is to write a passive github
       | repo that contains a performance test harness and test plugins
       | that run against Atlassian servers. Make it trivial for any
       | customer to download and use (I'm going to guess that Atlassian
       | has some license terms about API use that could make this
       | tricky).
       | 
       | Simultaneously, tweet results from anonymous twitter, post to
       | hacker news, get picked up in the trade press, make it big enough
        | that it's embarrassing for Atlassian when a bunch of customers
       | measure latency and realize the product is crap.
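        | 
        | A harness along those lines doesn't need to be elaborate. As a
        | hypothetical sketch (the URL is a placeholder, and a real run
        | would need to respect Atlassian's API terms), the core is just
        | repeated timed fetches plus percentile stats:

```python
import time
import urllib.request
from statistics import median, quantiles

def summarize(samples_ms):
    """Reduce raw load-time samples (milliseconds) to headline stats."""
    return {
        "median_ms": median(samples_ms),
        "p95_ms": quantiles(samples_ms, n=20)[-1],  # 95th percentile
        "worst_ms": max(samples_ms),
    }

def measure(url, runs=10):
    """Time `runs` full GETs of `url` and summarize the samples."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # include the body transfer in the timing
        samples.append((time.perf_counter() - start) * 1000)
    return summarize(samples)

# PAGE_URL = "https://..."  # placeholder for the page under test
# print(measure(PAGE_URL))
```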
        
         | jiggawatts wrote:
          | I've had similar ideas for shedding some much-needed light on
          | other similar performance pain points that large corporations
          | are sweeping under the rug via legal means.
        
       | papafox wrote:
        | I don't read section 3.3(i) as preventing criticism of
        | performance; rather, it prohibits the release of performance
        | benchmarks without further permission from Atlassian.
       | 
        | Having been on the receiving end of competitors running
        | 'benchmarks' on a service I worked on, and then trumpeting the
        | very contrived and out-of-context figures, I can understand why
        | Atlassian is trying to prevent it from happening to them.
       | 
       | Pity that it probably won't work.
        
         | vsskanth wrote:
          | If your competition is misrepresenting your product in
          | benchmarks, can't you just sue for defamation?
          | 
          | There's also the option of counter-articles.
        
       | judge2020 wrote:
       | Cloudflare has this as well:
       | 
       | >you will not and you have no right to: ... (f) perform or
       | publish any benchmark tests or analyses relating to the Cloud
       | Services without Cloudflare's written consent;
       | 
       | https://www.cloudflare.com/terms/#react-modal:~:text=(f)%20p...
        
         | MattGaiser wrote:
         | Are these terms really that nefarious or just a way to
         | terminate some customer who decides to load test your system
         | and ends up bringing it down? Legal documents are typically
         | written to be as broad as possible.
         | 
         | Perhaps it should be more narrowly written, but prohibiting
         | certain kinds of testing without permission is reasonable.
        
         | emilfihlman wrote:
         | This is just awful
        
           | mattm wrote:
           | As a software engineer who has been woken up in the middle of
           | the night while oncall because some random user wanted to run
           | their own performance tests against our system, I can
           | completely understand why companies want to prevent this from
           | happening without their awareness.
        
             | jiggawatts wrote:
             | a) That's not the reason.
             | 
             | b) Rate limiting is practically mandatory in an era of
             | multi-gigabit client network connections on even mobile
             | phones.
             | 
             | Remember: It's not a DDoS attack if it's _one user_.
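              | 
              | Point (b) is also cheap to get right. A minimal token-
              | bucket limiter (a generic sketch, not anything Atlassian-
              | specific) fits in a few lines:

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should answer 429 Too Many Requests
```

              | A single over-eager client then degrades gracefully
              | instead of paging the on-call engineer.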
        
               | smarx007 wrote:
               | Well, an "average" HN user may use something like a load
               | testing tool such as https://github.com/loadimpact/k6 or
               | at least rent a few VMs with 10Gbps links for a few hours
                | and use wrk/ab, so I would not be so sure about (a).
               | 
               | Edit: oh, and then they might dust off that Kali Linux VM
               | lying around somewhere...
        
         | saagarjha wrote:
          | It's amazing what legal gets away with at most companies, and
          | how little actual engineers and management get to see of this
         | part of their company. I am sure any self-respecting Cloudflare
         | engineer would be horrified to see that this is in their ToS
         | and yet it exists.
        
       | social_quotient wrote:
       | So the language highlighted here is:
       | 
       | (i) publicly disseminate information regarding the performance of
       | the Cloud Products;
       | 
       | But can we take a minute to talk about the combination of these
       | two :
       | 
       | (h) use the Cloud Products for competitive analysis or to build
       | competitive products;
       | 
       | (j) encourage or assist any third party to do any of the
       | foregoing.
       | 
        | Does this mean that, as a Jira user, I can't help build a Jira
        | competitor for a client of mine (we are a tech agency)? If
        | this is the case I would really have a hard time using Jira
        | and being compliant. After all, who is the arbiter on what
        | Jira is, exactly? And does this also mean people can't review
        | the platform as a means of comparing it to other platforms?
        | I'm a bit speechless here; it's a wtf sort of thing.
        
         | madamelic wrote:
         | Not just Jira, but also all Atlassian products which includes
         | HipChat, Trello, OpsGenie and a load of other products.
        
         | hertzrat wrote:
         | Imagine if products prior to the 90s or outside software had
         | these sorts of "agreements." Every store in a mall would have
         | them and every piece of fruit in a grocery store would require
         | you to agree to arbitration clauses and privacy policies and
          | non-disclosure and non-competition. Consumer Reports and class
         | action suits would not exist, and nobody would really be
         | allowed to talk about it because of the NDAs. Automated facial
         | and voice recognition in smart home devices could sell data to
         | companies to enforce it. The news would not be able to talk
         | about it. It would be a good setup for a dystopian movie, no?
        
       | jjuhl wrote:
       | "Atlassian Cloud ToS section 3.3(I) prohibits discussing
       | performance issues" - Their ToS may prohibit it, but that is in
       | no way going to stop me from doing it - I don't give a shit about
       | some document they write. Atlassian products suck hard and their
       | performance characteristics are horrible. I hate being forced to
       | use their crap at work.
        
         | bluedino wrote:
          | Most enterprise software prohibits discussing benchmarks, etc.
        
         | lostdog wrote:
         | Maybe you'll get lucky and they'll ban you from using their
         | products!
        
         | Hamuko wrote:
         | The problem with Atlassian's Terms of Service is that most of
         | their end-users are not paying for the software and do not
         | really care if they violate an agreement they were either
         | forced to make or which someone made on their behalf.
        
           | dylan604 wrote:
           | They will suddenly care when Atlassian locks them out of
           | being able to access anything. Then, we'll see posts on
           | Twitter or here or elsewhere about some user crying about not
           | getting access to "their" stuff on a 3rd party's site.
        
             | cronix wrote:
              | Or conversely, some might actually be thankful in the
              | long run for being forced to bite the bullet and find an
              | alternative that raises productivity/efficiency and, as a
              | byproduct, usually makes for a happier work environment.
        
             | saagarjha wrote:
              | The system we've constructed, where users don't own the
              | content they post on platforms and can be arbitrarily
              | locked out of access to it, is really one of the worst
              | things that has happened to the web.
        
           | jerf wrote:
           | I don't think it applies to us. Our employers can sign
           | whatever they like and constrain us from speaking in official
           | company-related capacities, but we're no more bound to that
            | as individuals than we're bound to anything else our
           | companies sign as individuals. As individuals, we're not in a
           | relationship with Atlassian at all.
        
           | jjuhl wrote:
            | Be honest; no one cares about (or reads) ToS agreements.
        
             | Hamuko wrote:
             | No one cares to read any Terms of Service agreements, but I
             | think Atlassian's products face another tier: not caring to
             | adhere to them at all.
             | 
             | I'd personally find it pretty funny to go up to my boss and
             | tell them I can't read my Jira tickets because Atlassian
             | banned my account. Or would they ban the entire
             | organization instead? Either way, hilarious.
        
       | unixhero wrote:
       | Atlassian Cloud customer here. Large enterprise. The Jira and
       | Confluence cloud products are slow as fuuuuuck.
        
         | confluence_perf wrote:
         | Sorry to hear it's been a frustrating experience. I'm a PM for
         | Confluence Cloud and we're always trying to make it better.
          | Would you be willing to share more specifics, such as:
          | 
          |   - Pages with content X are the slowest
          |   - Trying to do A/B/C is annoyingly slow
          |   - etc?
         | 
         | (edit: looks like HN is severely limiting my reply rate so
         | apologies for delays)
        
           | burnthrow wrote:
           | Come on, everything is slow and you know it.
        
             | confluence_perf wrote:
             | We have certainly heard from some customers that agree that
             | 'everything' is slow, but we've also heard from other
             | customers saying they have no problems.
             | 
              | We would love to fix "everything", and we have some
              | longer-term projects focused on this -> however,
              | "everything" fixes tend to be a more incremental boost
              | and also take longer to complete.
              | 
              | If you have any feedback about "specific" items that are
              | the most frustrating, we'd love to hear about those ->
              | targeted fixes for specific items can be much faster,
              | yield much greater gains, and usually offer a better
              | ratio of user-experience gain to engineering time.
             | 
             | If not, I can only say that we are definitely working on
             | making 'everything faster'
             | 
             | (edit: trying to reply but looks like HN is limiting my
             | reply rate)
             | 
             | (edit: maybe I can post my replies here and hopefully
             | they'll get read)
             | 
              | ------ @rusticpenn - It is definitely possible that 'some
             | users are just used to it'. But we also see a very wide
             | variance in individual customers' performance numbers (ie.
             | some instances have consistently faster performance than
             | other instances), and even within individual instances
             | variance amongst users (some users have consistently faster
             | experience than other users on the same instance) -> we're
             | trying what we can to narrow down the causes in this
             | variance.
             | 
             | Hearing from "users with slow experiences" is simply one of
             | the ways we're trying to track this down, but it helps if
             | users are willing to provide more info.
             | 
             | --------
             | 
             | @ratww - thank you for the suggestion! We have some amount
             | of data that helps us see what might be different between
             | instances, but haven't gone out of our way to 'interview a
              | fast customer'; I'll bring this up with the team to see.
             | 
             | The two biggest factors I think we've seen: slow machines
             | can contribute (but not a necessity), and large pages
             | (especially with large tables, or large number of tables)
             | can contribute.
        
               | ratww wrote:
                | _> but we've also heard from other customers saying they
               | have no problems_
               | 
               | Can I suggest following up with those customers to see if
               | and how they're using the product, what's their computer
               | configuration, if there's anything special about them?
        
               | mgkimsal wrote:
               | There's a bit of verbal sleight-of-hand going on -
               | probably not intentionally.
               | 
               | "This is very slow"
               | 
               | "We have no problems"
               | 
               | These aren't addressing the same things, really (unless
               | the OP was translating "we're happy with the speed of the
               | entire system" as "we have no problems").
               | 
                | Are the people reporting "no problems" actual end
                | users? People I know who've become acclimated to Jira
                | would happily respond "we have no problems", while the
                | people below them who have to use Jira 10x more often
                | (multiple times per hour, vs a daily look at progress,
                | for example) would happily say "this is slow as
                | molasses (and that's a problem)".
        
               | acdha wrote:
               | > We have certainly heard from some customers that agree
               | that 'everything' is slow, but we've also heard from
               | other customers saying they have no problems.
               | 
               | What do your metrics show? I instrument my web sites so I
               | know how long every operation - server responses, front-
               | end JS changes, etc. - takes and can guide my development
               | accordingly. You have a much larger budget and could be
               | answering this question with hard data.
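                | 
                | Server-side, that kind of instrumentation can start as
                | small as a decorator that records wall-clock time per
                | operation (a generic sketch; a real deployment would
                | ship these numbers to a metrics backend):

```python
import time
from collections import defaultdict
from functools import wraps

timings = defaultdict(list)  # operation name -> list of durations (ms)

def timed(fn):
    """Record how long each call to `fn` takes, keyed by its name."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__].append(
                (time.perf_counter() - start) * 1000)
    return wrapper

@timed
def render_page():
    time.sleep(0.01)  # stand-in for real server-side work

render_page()
print(f"render_page: {timings['render_page'][0]:.1f} ms")
```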
               | 
               | I'll second the "everything" responses. Request Tracker
               | on 1990s hardware was considerably faster than Jira is
               | today - and better for serious use, too.
        
               | confluence_perf wrote:
               | Hi acdha,
               | 
               | We have metrics, but of course as with many such things
               | you always want more insights than the amount of data
               | you're collecting (so we're always trying to grow this as
               | appropriate).
               | 
               | This data is what led to the above (added edit trying to
               | reply to @rusticpenn) saying we can see that "some
               | instances are slower than others", and "some users are
               | slower than others". I can't share those numbers of
               | course though.
               | 
                | However, privacy considerations do prevent us from
                | collecting too much data, so differentiating why
                | individual users might have different experiences (even
                | when other known factors are similar/identical) is
                | difficult.
               | 
               | Also I'd be happy to take any suggestions you have about
               | what to look at back to my engineering team, if you're
               | willing to share other ideas. I know we're tracking
                | several of the ones you mention, but more options are
                | always better.
        
               | rusticpenn wrote:
               | > but we've also heard from other customers saying they
               | have no problems
               | 
               | I am sorry, there are probably customers who are used to
                | the tools. Maybe they don't pass the Mom Test. That
                | point comes across as unnecessary defensiveness here.
        
               | dylan604 wrote:
                | I think these may be the same people calling Trump to
                | say that he's a great president. Both claims are equally
                | unbelievable.
        
           | hchz wrote:
           | "My boss told me to generate a bunch of JIRAs in reaction to
           | the recent _accurate_ discussions on HN of how poor our
           | performance is, so I need specific dit-dot issues to buff our
           | team metrics rather than address the _cause_ of the issues,
            | which is a political non-starter"
        
             | saagarjha wrote:
             | A more charitable interpretation might be "my boss won't
             | let me fix things unless I have specific comments about
             | problems from people who use the software".
        
               | jiggawatts wrote:
               | Translation: His boss is an idiot and their lunch is
               | going to be eaten by a company where management gives a
               | shit about product quality.
        
           | tw25613937 wrote:
           | Not trying to be snarky, but... do you use your product?
        
             | grzm wrote:
              | These are questions I have as well. That said, creating a
              | throwaway and prefacing a comment with "Not trying to be
              | snarky" shouldn't be an excuse for not taking the time to
              | couch questions in a way you're confident won't be
              | interpreted negatively. This isn't directed just at you;
              | I see this behavior all too often: people using
              | throwaways as an excuse to not take the time to express
              | things in a manner that doesn't need to apologize for
              | itself.
             | 
             | Atlassian employees use their products, and any tips and
             | tricks they may have for using them effectively, or for
             | making the experience of JIRA, Confluence, or their other
             | tools more enjoyable and usable, would be great to know!
        
               | dylan604 wrote:
               | If you need tips or TRICKS to make a product usable, you
               | have a BAD product.
        
               | smarx007 wrote:
               | Well, you certainly need "tips and tricks" to make Arch
               | Linux fully usable. Every powerful tool needs to be
               | adjusted to its use (Github is full of dotfiles and macOS
               | bootstrap repos). Doing so is a sign of professionalism
               | (craftsmanship).
        
               | grzm wrote:
               | I agree, and I also know that you have to deal with the
               | world as it is right now even while you work to make it
               | better.
               | 
               | If you have to use Jira or Confluence at work, you
               | probably want to know how to make that as useful as
               | possible. If you're working at Atlassian, you probably
               | want to make your customer's experience as enjoyable as
               | possible as soon as possible. Ideally you have a great
               | product and great documentation and all happy customers.
               | If that's not the case, you have an opportunity to work
               | on a number of fronts, including improving documentation
               | and the product, _and help current customers with the
               | product as it is._ You can and should be doing all of
               | these things.
               | 
               | Piling on doesn't help anything.
        
               | hchz wrote:
               | Some people don't want to use it at all, and don't care
               | for the situation that C-suite everywhere buys
               | Atlassian's trash.
               | 
               | They don't want minor improvements to help it limp along,
               | they want to vent and complain about it.
               | 
               | I think it's impossible that Atlassian evolves into a
               | good product company, but it's entirely possible that my
               | next CTO googled for opinions on the product, found a few
               | discussions on HN with a combined 3,000 complaints about
               | what a garbage fire it is, and went with Clubhouse.
        
               | grzm wrote:
               | And the thread from yesterday
               | (https://news.ycombinator.com/item?id=25590846 for a site
               | named https://whyjirasucks.com) or any of the other many
               | rants on Atlassian around the web don't suffice? I hardly
               | think that this thread is going to be the tipping point.
               | It's not like this is news.
               | 
               | Given your opinions regarding Atlassian, what would you
               | think of your next CTO if they were even considering
               | Atlassian? Is that someone you'd want to work for?
        
           | eitland wrote:
           | Not GP but I also have the same feeling across Atlassian
           | products:
           | 
           | it is more like a general feeling _all the time_ for many of
           | us.
           | 
           | I'm using the latest Firefox on Windows, a developer laptop
           | with 32GB memory that was brand new this spring as well as
           | 500/500 fiber.
        
             | confluence_perf wrote:
             | Is this also on Cloud products?
             | 
             | Right now on holiday (through Tuesday), but I would like
             | to learn more. I see your contact info in your profile; if
             | you give consent I'd like to send you an email on Wednesday
             | with some follow-up questions (the first question is "what
             | is the URL for your cloud instance", so I don't want to be
             | asking for it on a public forum)
             | 
             | (edit: typo)
        
               | crooked-v wrote:
               | > Is this also on Cloud products?
               | 
               | On my company's cloud instance of Jira, it's a minimum of
               | a 2-3 second delay to do _anything_. Edit, wait a few
               | seconds, save, wait a few seconds, change a field, wait a
               | few seconds... and God help you if you need to reload the
               | page because something got stuck.
        
           | snypher wrote:
           | As a product manager, are you allowed to discuss performance
           | and benchmarks without violation of your contract? Or, is it
           | just customers that are prohibited from this?
        
         | akudha wrote:
         | Same, they're slow as a snail high on weed.
         | 
         | I also got no response (or even an acknowledgement) for the
         | feedback I gave. Like most people here, I too am forced to use
         | it at work.
        
       | x87678r wrote:
       | I thought Jira was slow then our team started using Service Now.
       | Holy ____ that is junk.
        
         | Macha wrote:
         | Service Now has an incredibly clunky UI in terms of actually
         | finding what you want, but at least in our JIRA Datacenter vs
         | Service Now (it's under a service now domain, so I assume some
         | sort of cloud setup) setups with several hundred thousand
         | issues in each, Service Now is actually pretty fast in the
         | sense of "You click an action and it does its intended
         | purpose", where JIRA falls down. It's slow only in the "You
         | need 5 actions to get to the thing you want" sense.
        
       | bionade24 wrote:
       | Just running atlassian.com through Google PageSpeed shows why
       | they're doing that: 12 out of 100. That is emblematic of other
       | Atlassian products, too.
       | 
       | And it's really not hard to score over 95 on PageSpeed: just
       | don't use JS fucking everywhere.
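[Editor's note: the PageSpeed check mentioned above can be scripted against Google's public PageSpeed Insights v5 API. This is a sketch, not Atlassian-specific tooling; the `runPagespeed` endpoint, query parameters, and the `lighthouseResult` response path are taken from the public API documentation as best recalled, and the API key is a placeholder you would supply yourself.]

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Public PageSpeed Insights v5 endpoint (per Google's API docs).
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url, strategy="desktop", api_key=None):
    """Build a PageSpeed Insights v5 request URL (no network needed)."""
    params = {"url": page_url, "strategy": strategy, "category": "performance"}
    if api_key:
        # Optional: an API key raises the unauthenticated rate limit.
        params["key"] = api_key
    return PSI_ENDPOINT + "?" + urlencode(params)

def performance_score(page_url):
    """Fetch the Lighthouse performance category score (0-100) for a page."""
    with urlopen(psi_request_url(page_url)) as resp:
        data = json.load(resp)
    # Lighthouse reports the category score as a fraction between 0 and 1.
    return round(100 * data["lighthouseResult"]["categories"]["performance"]["score"])
```

For example, `performance_score("https://www.atlassian.com")` would return the same 0-100 number the PageSpeed web UI shows, so you can track it over time rather than spot-checking by hand.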
        
       | twblalock wrote:
       | Every Atlassian product I've used has had scalability problems.
       | Instead of trying to hide them, they should work on fixing them.
        
         | gonzo41 wrote:
         | If only there was a product that helped size up work and let
         | teams manage a backlog of features. :D
         | 
         | But I agree totally.
        
       | koreanguy wrote:
       | Atlassian Cloud is extremely slow, do not use it
        
       | thewebcount wrote:
       | I don't want to sound mean, but before discussing performance
       | issues, can we talk about how completely unusable all of
       | Atlassian's products are? I mean this sincerely. I feel bad
       | saying it because I know there are lots of people probably on
       | this site that worked on them. But I literally often have no idea
       | how to proceed when using these products. That's something that
       | hasn't happened to me since before I was a teenager.
       | 
       | As 2 random examples, I've used both Confluence and Crucible in
       | the last month. In Confluence, when I log in I see everything
       | that anyone in my (1,000+ people) organization worked on most
       | recently. People I've never even heard of show up as having
       | edited some random document that doesn't concern me. Meanwhile, I
       | can find no way to list all articles that I created. There's a
       | small list of things I edited or looked at in the last 30 days,
       | but no way to say, "just show me everything I created."
       | 
       | Meanwhile, in Crucible, I literally can't figure out how to do
       | anything. I'm reading through the changes to some source code and
       | adding comments, and after an hour of doing that, it still says
       | I've completed 0% of my review. WTF? And when I start my own code
       | review, every god damned time, it tells me, "You're about to
       | start a review with no reviewers." It then offers me 2 choices:
       | Abandon the review or cancel. I get what "abandon" does. What
       | does cancel do? Cancel the review? Cancel abandoning the review?
       | Why is there no button right there to add reviewers? That's what
       | I most want to do! (And there are no reviewers yet because it
       | literally has not yet given me the option to add them previously
       | in the workflow. WTF?)
       | 
       | You can talk about performance all you want. I won't bother until
       | the products actually perform some useful function. As of now, as
       | far as I can tell, they don't.
        
       | mostlyghostly wrote:
       | Simple English Translation: We cannot trust any publicly posted
       | claims about product performance, since they have effectively
       | been cherry-picked by the marketing / legal team.
       | 
       | Bad claims can be taken down, so the only remaining claims are
       | the good ones.
       | 
       | Cool - can anyone provide a quick list of alternatives?
        
       | ec664 wrote:
       | IMO their terrible performance is the #1 reason not to use their
       | cloud services, and there's usually nothing you can do about it.
       | The fewer resources they use per customer, the higher their
       | margins.
        
       | FreezerburnV wrote:
       | The specific part of the ToS being referred to:
       | 
       | (i) publicly disseminate information regarding the performance of
       | the Cloud Products
        
       | josh_atlassian wrote:
       | Just so everyone is aware, this is Atlassian's stance on that
       | language taken from the internal guidance on the ToS. To be clear
       | I'm not defending this stance as I think it is flawed. But I
       | wanted you guys to know what Atlassians are told about it -
       | 
       | ------------------------------------------------------
       | 
       | Section 3.3: Benchmarking Can you explain Atlassian's stance on
       | Benchmarking?
       | 
       | Like many other software companies, Atlassian has this language
       | in its terms to protect users from flawed reviews and benchmarks.
       | By requiring reviewers to obtain Atlassian's consent before
       | publicly disseminating their results, Atlassian can confirm their
       | methodology (i.e. latest release, type of web browser, etc.) and
       | make sure they deliver verifiable performance results. Atlassian
       | is not trying to prevent any users from internally assessing the
       | performance of our products.
       | 
       | The language related to the public distribution of performance
       | information has been included in our customer agreement since
       | 2012.
       | 
       | Customers can obtain Atlassian's consent by filing a Support
       | ticket. The Support engineer will then need to bring in a PMM for
       | approval of the data/report.
       | 
       | ------------------------------------------------------
        
         | sand_castles wrote:
         | > Atlassian has this language in its terms to protect users
         | from flawed reviews and benchmarks.
         | 
         | The solution to lies is not to censor, but transparency.
         | 
         | Atlassian has all the resources in the world to answer any
         | external benchmarks done by third party.
         | 
         | If you can hire an army of lawyers, surely its possible to have
         | a full-time engineer running benchmarks.
        
         | ineedasername wrote:
         | That explains the stance but is not sufficient to justify it.
         | It gives Atlassian infinite power to stomp on any benchmark
         | that shows poor performance under a claim that it is flawed. It
         | is also irrelevant that it's been in your ToS since 2012:
         | Precedent or longevity do not make consumer-unfriendly
         | restrictions acceptable.
         | 
         | This is implicitly recognized by allowing _internal_
         | assessment: That assessment would be just as vulnerable to
         | flawed methodology and therefore flawed decision making on
         | products. If you were that concerned over such issues, you
         | could issue further restrictions on performance assessment that
         | limited such activity to be conducted only under Atlassian's
         | close review or using your own mandated methodology. One reason
         | you probably don't do that is because potential buyers would
         | balk at those restrictions and either pass on your product or
         | responsibly engage in due diligence and perform their own
         | assessments regardless.
         | 
         | Further, the resources to do extensive internal assessment may
         | be lacking in many organizations, which means your provision to
         | allow internal testing is meaningless to many customers. As a
         | result, the prohibition against public disclosure thereby
         | deprives them of any way of obtaining objective external
         | analysis.
         | 
         | You could satisfy your concerns by requiring that public
         | disclosure be reviewed by Atlassian prior to publication. You
         | could require an option for Atlassian to comment on the results
         | with embedded notations without restricting publication itself.
         | That would still be heavy handed but at least allow a
         | reasonable amount of independent review of your performance.
        
           | sytse wrote:
           | Those suggestions in your last paragraph look very reasonable
           | to me. At GitLab we explicitly allow performance testing as
           | our ninth and final stewardship promise:
           | https://about.gitlab.com/company/stewardship/#promises
           | But I recognize there is a trade-off, and companies can
           | reasonably put the balance at different points.
        
             | jiggawatts wrote:
             | There is no tradeoff.
             | 
             | DeWitt clauses are corporate censorship and are 100% self-
             | serving.
             | 
             | There is zero benefit to consumers having no benchmarks at
             | all available for entire product categories.
             | 
             | There is an enormous benefit to corporations to be able to
             | silence critics with the threat of bankruptcy via lawsuit.
             | 
             | This is big corporations using the law to bully journalists
             | and citizens, nothing more.
        
         | tw25613937 wrote:
         | > protect users from flawed reviews
         | 
         | Perhaps California will address this problem by banning
         | "performance benchmarking platforms" from listing evaluated
         | products without an agreement from the vendor... [1]
         | 
         | [1] https://news.ycombinator.com/item?id=25601814
        
       | tzs wrote:
       | I have no idea if it applies in this case, but sometimes terms
       | like this are there because competitors have them.
       | 
       | You have product X, and your competitor Y publishes whitepapers
       | and ads comparing their performance to yours, showing yours is
       | terrible. You think they rigged the tests, and want to publish
       | your own X vs. Y comparison but Y's TOS prohibits it.
       | 
       | Once one major Y does this, many others follow suit as a
       | defensive measure against Y.
       | 
       | I seem to recall seeing some where X's TOS would try to limit the
       | prohibition to just that situation, such as saying that you could
       | only publish X vs. Y performance comparisons if either there were
       | no restrictions on others publishing Y performance comparisons or
       | Y granted X an exception to Y's prohibition.
        
       | random3 wrote:
       | If you start using Linear you won't need a benchmark to notice
       | the difference.
       | 
       | This is what happens when the "clueless" start "innovating". I've
       | had several conversations over the years with members of
       | Atlassian technical teams. They always wanted to work on
       | performance, but were never allowed to (priorities).
       | 
       | They are in "good" company (Oracle, China, etc.). What's
       | preventing anonymous performance benchmarks, though?
        
         | guitarbill wrote:
         | what do they work on? how many other features can you bolt onto
         | an already mediocre wiki? or is it just bugfixes, keeping up
         | with browser quirks, and useless UI "redesigns"/reskins?
        
           | maccard wrote:
           | As awful as confluence is, it's not really a mediocre wiki.
           | It's the best I've used. (Out of confluence, notion,
           | mediawiki and some god awful internal thing based on
           | wordpress). Its interface sucks, the performance sucks, the
           | editor sucks, search sucks, and it's still the best.
        
       | tyingq wrote:
       | Someone went to the trouble of making a semi-functional mockup to
       | demonstrate to Atlassian that the product didn't need to be as
       | big and slow: https://jira.ivorreic.com/
        
       | Simulacra wrote:
       | Jira was a disaster for our team, but the response we got as a
       | major government contractor from Atlassian was so bad that we
       | swore off all Atlassian products. When I would bring performance
       | issues to the technical support team it was like trying to
       | convince Apple there was a problem with the butterfly keyboard.
       | Like talking to a brick wall.
        
       | las3r wrote:
       | Both cloud and on-prem user here. Both systems are slow, and
       | hardware doesn't scale. Simply browsing issues easily takes 2 to
       | 5 seconds per click. Confluence is okay, I guess; just the
       | slowness is annoying. That, and trying to format text properly.
       | 
       | When you've worked with Azure DevOps or GitHub before, Atlassian
       | tooling is really a blast from the past.
        
         | uncledave wrote:
         | Yep. I was the company's official JIRA and Confluence on-prem
         | JVM restarter, mail queue unclogger and schema unfucker for 5
         | years. When they moved it to Cloud it was the best day of my
         | life because it wasn't my problem any more even though the
         | performance was even worse.
        
       | ssddanbrown wrote:
       | I have not recently used Confluence myself, but having created
       | an open source project in the same space, I have seen a few
       | users migrate from Confluence citing speed as their main
       | frustration and reason for migrating away. This has been the
       | case for a couple of years, so I'm surprised it has not been
       | more of a focus for them.
        
       ___________________________________________________________________
       (page generated 2021-01-02 23:01 UTC)