[HN Gopher] Microsoft Office migration from Source Depot to Git
       ___________________________________________________________________
        
       Microsoft Office migration from Source Depot to Git
        
       Author : dshacker
       Score  : 288 points
       Date   : 2025-06-12 00:15 UTC (22 hours ago)
        
 (HTM) web link (danielsada.tech)
 (TXT) w3m dump (danielsada.tech)
        
       | smitty1e wrote:
       | > We spent months debugging line ending handling
       | 
       | "Gosh, that sounds like a right mother," said Unix.
        
       | pm90 wrote:
        | It's oddly fascinating that Microsoft has managed to survive for
       | so long with ancient/bad tools for software engineering. Almost
       | like "life finds a way" but for software dev. From the outside it
       | seems like they are doing better now after embracing OSS/generic
       | dev tools.
        
         | com2kid wrote:
          | At one point source depot was incredibly advanced, and there
         | are still features that it had that git doesn't. Directory
         | mapping being a stand out feature! Being able to only pull down
         | certain directories from a depot and also remap where they are
         | locally, and even have the same file be in multiple places.
         | Makes sharing dependencies across multiple projects really
         | easy, and a lot of complicated tooling around "monorepos"
         | wouldn't need to exist if git supported directory mapping.
         | 
          | (You can get 80% of the way there with symlinks, but in my
          | experience they eventually break in git when too many
          | different platforms are making commits.)
         | 
         | Also at one point I maintained an obscenely advanced test tool
         | at MS, it pounded through millions of test cases across a slew
         | of CPU architectures, intermingling emulators and physical
         | machines that were connected to dev boxes hosting test code
         | over a network controlled USB switch. (See:
         | https://meanderingthoughts.hashnode.dev/how-microsoft-tested...
         | for more details!)
         | 
         | Microsoft had some of the first code coverage tools for C/C++,
         | spun out of a project from Microsoft Research.
         | 
         | Their debuggers are still some of the best in the world. NodeJS
         | debugging in 2025 is dog shit compared to C# debugging in 2005.
        
           | bsder wrote:
           | > git supported directory mapping.
           | 
           | Is this a "git" failure or a "Linux filesystems suck"
           | failure?
           | 
            | It seems like "Linux filesystems" are starting to creak under
           | several directions (Nix needing binary patching, atomic
           | desktops having poor deduplication, containers being unable
           | to do smart things with home directories or too many
           | overlays).
           | 
           | Would Linux simply sucking it up and adopting ZFS solve this
           | or am I missing something?
        
             | MobiusHorizons wrote:
             | How is that related? I don't think anyone would suggest
             | ntfs is a better fit for these applications. It worked
             | because it was a feature of the version control software,
             | not because of file system features.
        
             | yjftsjthsd-h wrote:
             | What would ZFS do for those issues? I guess maybe
             | deduplication, but otherwise I'm not thinking of anything
             | that you can't do with mount --bind and overlays (and I'm
             | not even sure ZFS would replace overlays)
        
               | bsder wrote:
                | Snapshots seem to be a cheap feature in ZFS but are
                | expensive everywhere else, for example.
               | 
                | OverlayFS has had performance issues on Linux for a while
                | (once you start composing a bunch of overlays, performance
                | drops dramatically and you start hitting limits on the
                | number of overlays).
        
               | adrian_b wrote:
               | Nowadays the ZFS advantage for snapshots is no longer
               | true.
               | 
               | Other file systems, e.g. the much faster XFS, have
               | equally efficient snapshots.
        
           | klank wrote:
           | Ok, but now tell me your real thoughts on sysgen. ;-)
        
           | o11c wrote:
           | As always, git's answer to the problem is "stop being afraid
           | of `git submodule`."
           | 
           | Cross-repo commits are not a problem as long as you
           | understand "it only counts as truly committed if the child
           | repo's commit is referenced from the parent repo".
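A minimal sketch of that rule (paths and names are made up for the demo, not from the thread): a commit in the child repo only "counts" once the parent records the new submodule pointer.

```shell
# A child commit is only "truly committed" once the parent repo pins it.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
rm -rf /tmp/subdemo && mkdir -p /tmp/subdemo && cd /tmp/subdemo
git init -q child && git -C child commit -q --allow-empty -m "c1"
git init -q parent && cd parent
git -c protocol.file.allow=always submodule --quiet add ../child lib
git commit -q -m "pin child at c1"
# New work lands in the child...
git -C ../child commit -q --allow-empty -m "c2"
git -C lib fetch -q origin && git -C lib checkout -q FETCH_HEAD
git status --short    # parent shows lib as modified: c2 not pinned yet
git add lib && git commit -q -m "pin child at c2"
git status --short    # clean: c2 is now referenced from the parent
```

Until the final `git add lib && git commit`, nothing downstream of the parent repo sees c2, which is exactly the failure mode xmprt describes below.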
        
             | xmprt wrote:
             | > it only counts as truly committed if the child repo's
             | commit is referenced from the parent repo
             | 
             | This is a big problem in my experience. Relying on
             | consumers of your dependency to upgrade their submodules
             | isn't realistic.
        
             | mickeyp wrote:
            | Git submodules are awful. Using Subversion's own submodule
            | mechanism (svn:externals) should be mandatory for anyone
            | claiming Git's implementation is somehow worthwhile or good.
        
           | tikkabhuna wrote:
           | I never understood the value of directory mapping when we
           | used Perforce. It only seemed to add complexity when one team
           | checked out code in different hierarchies and then some
           | builds worked, some didn't. Git was wonderful for having a
           | simple layout.
        
             | senderista wrote:
             | You might feel differently if you worked on just a few
             | directories in a giant repo. Sparse client views were a
             | great feature of SD.
        
               | int_19h wrote:
               | I'm in exactly this situation with Perforce today, and I
               | still hate it. The same problem OP described applies -
               | you need to know which exact directories to check out to
               | build, run tests etc successfully. You end up with wikis
               | filled with obscure lists of mappings, many of them
               | outdated, some still working but including a lot of cruft
               | because people just copy it around. Sometimes the
               | required directories change over time and your existing
               | workspaces just stop working.
               | 
               | Git has sparse client views with VFS these days.
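For what it's worth, plain git's built-in sparse-checkout covers the "only pull down certain directories" half of a client view (no remapping). A minimal sketch, with a made-up monorepo layout:

```shell
# Sparse checkout: materialize only one directory of a larger repo.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
rm -rf /tmp/sparsedemo && mkdir -p /tmp/sparsedemo && cd /tmp/sparsedemo
git init -q .
mkdir -p word onenote excel
for d in word onenote excel; do echo code > "$d/a.txt"; done
git add . && git commit -q -m "monorepo layout"
git sparse-checkout set onenote   # keep only onenote/ in the worktree
ls                                # word/ and excel/ are gone from disk
```

The full history is still there; only the working tree is filtered, so this addresses the "know which exact directories to check out" pain without a separate mapping file.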
        
         | dangus wrote:
         | Let's not forget that Microsoft developed a lot of tools in the
         | first place, as in, they were one of the companies that created
         | things that didn't really exist before Microsoft created them.
         | 
         | Git isn't even very old, it came out in 2005. Microsoft Office
         | first came out in 1990. Of course Office wasn't using git.
        
           | dboreham wrote:
           | Some examples would be useful here. Not knocking MS tools in
            | general but are there any that were industry firsts? Source
           | code control for example existed at least since SCCS which in
           | turn predates Microsoft itself.
        
             | pianoben wrote:
             | Of course that's only half the story - Microsoft invents
             | amazing things, and promptly fails to capitalize on them.
             | 
             | AJAX, that venerable piece of kit that enabled _every
             | dynamic web-app ever_ , was a Microsoft invention. It
             | didn't really take off, though, until Google made some maps
             | with it.
        
             | noen wrote:
             | Microsoft rarely did or does anything first. They are
             | typically second or third to the post and VC is no
             | different.
             | 
             | Most people don't know or realize that Git is where it is
             | because of Microsoft. About 1/2 of the TFS core team spun
             | out to a foundation where they spent several years doing
             | things like making submodules actually work, writing git-
             | lfs, and generally making git scale.
             | 
             | You can look for yourself at the libgit2 repo back in the
             | 2012-2015 timeframe. Nearly the whole thing was rewritten
             | by Microsoft employees as the earliest stages of moving the
             | company off source depot.
             | 
             | It was a really cool time that I'm still amazed to have
             | been a small part of.
        
           | lIl-IIIl wrote:
           | Office is a package including things like Word and Excel.
           | Word itself came out in 1984 for the first Macintosh. Windows
           | OS did not yet exist.
        
         | senderista wrote:
         | Google used Perforce for years and I think Piper still has
         | basically the same interface? So no, MSFT wasn't ridiculously
         | behind the times by using Source Depot for so long.
        
       | azhenley wrote:
       | I spent nearly a week of my Microsoft internship in 2016 adding
       | support for Source Depot to the automated code reviewer that I
       | was building
       | (https://austinhenley.com/blog/featurestheywanted.html) despite
       | having no idea what Source Depot was!
       | 
       | Quite a few devs were still using it even then. I wonder if
       | everything has been migrated to git yet.
        
         | sciencesama wrote:
         | Naah still a lot of stuff works on sd !! Those sd commands and
         | setting up sd gives me chills !!
        
         | hacker_homie wrote:
         | Most of the day to day is in git, now.
        
         | PretzelPirate wrote:
         | I miss CodeFlow everyday. It was such a great tool to use.
        
       | 3eb7988a1663 wrote:
       | We communicated the same information through multiple channels:
       | weekly emails, Teams, wiki docs, team presentations, and office
       | hours. The rule: if something was important, people heard it at
       | least 3 times through different mediums.
       | 
       | If only this were standard. Last week I received the only
       | notification that a bunch of internal systems were being deleted
       | in _two weeks_. No scream test, no archiving, just straight
       | deletion. Sucks to be you if you missed the email for any reason.
        
         | MBCook wrote:
         | No kidding. The amount of things that change in important
         | environments without anyone telling people outside their teams
         | in some organizations can be maddening.
        
         | dshacker wrote:
          | Even with this, there were many surprised people. I'm still
          | amazed at all of the people who can ignore everything and just
          | open their IDE and code (and maybe never see Teams or email).
        
           | pvdebbe wrote:
            | In my previous company it came as a surprise to me to learn
            | from a third party that our office had moved lol.
        
           | sofixa wrote:
           | Alternatively, communications fatigue. How many emails does
           | the average employee get with nonsense that doesn't apply to
           | them? Oh cool, we have a new VP. Oh cool, that department had
           | a charity drive. Oh cool, system I've never heard of is
           | getting replaced by a new one, favourite of this guy I've
           | never heard of.
           | 
           | Add in the various spam (be it attacks or just random vendors
           | trying to sell something).
           | 
           | At some point, people start to zone out and barely skim, if
           | that, most of their work emails. Same with work chats, which
           | are also more prone to people sharing random memes or photos
           | from their picnic last week or their latest lego set.
        
             | kmoser wrote:
             | Everybody gets important emails, and it's literally part of
             | their job to filter the wheat from the chaff. One of my
             | benchmarks for someone's competency is their ability to
             | manage information. With a combination of email filters and
             | mental discipline, even the most busy inbox can be
             | manageable. But this is an acquired skill, akin to not
             | getting lost in social media, and some people are far
             | better at it than others.
        
               | BenjiWiebe wrote:
               | If the same internal sender sends both irrelevant and
               | important messages, it'll be pretty hard or impossible to
               | filter.
               | 
               | My #1 method of keeping my inbox clean, is unsubscribing
               | from newsletters.
        
               | Marsymars wrote:
               | Our HR lady took personal offence when I asked to be
               | unsubscribed from the emails about "deals" that employees
               | have access to from corporate partners. :(
        
               | Vilian wrote:
               | You can set custom rules in thunderbird to deal with
               | specific mails, like tagging it as a "sale" or just
               | deleting it based on regex
        
               | Marsymars wrote:
               | Yeah, I ended up doing that with Outlook.
               | 
               | I also set up a rule to auto-delete phishing test emails
               | based on their headers, which annoyed the security team.
        
               | kmoser wrote:
               | Yes, the last filter is always the human being who has to
               | deal with whatever the computer couldn't automate. But
               | even then, you should be able to skim an email and
               | quickly determine its relevancy, and decide whether you
               | need to take action immediately, can leave it for the
               | future, or can just delete it. Unless you're getting
               | thousands of emails a day, this should be manageable.
        
           | AdamN wrote:
           | If you read all the notifications you'll never do your actual
           | job. People who just open their IDE and code are to be
           | commended in some respects - but it's a balance of course.
        
         | xwolfi wrote:
         | What we do is we scream the day before, all of us, get replied
         | that we should have read the memo, reply we have real work to
         | do, and the thing gets cancelled last minute, a few times a
         | year, until nobody gives a fuck anymore.
        
         | indemnity wrote:
         | I feel this.
         | 
         | Every month or two, we get notifications along the FINAL
         | WARNING lines, telling us about some critical system about to
         | be deleted, or some new system that needs to be set up Right
         | Now, because it is a Corporate Standard (that was never rolled
         | out properly), and by golly we have had enough of teams
         | ignoring us, the all powerful Board has got its eyes on you
         | now.
         | 
         | It's a full time job to keep up with the never-ending churn. We
         | could probably just spend all our engineering effort being
         | compliant and never delivering features :)
         | 
         | Company name withheld to preserve my anonymity (100,000+
         | employees).
        
       | 90s_dev wrote:
       | I actually remember using Perforce back in like 2010 or
       | something. And I can't remember why or for which client or
       | employer. I just remember it was stupid.
        
         | dboreham wrote:
         | And expensive.
        
         | broodbucket wrote:
         | There's still a lot of Perforce around. I've thankfully managed
         | to avoid it but I have plenty of friends in the industry who
         | still have to use it.
        
           | HideousKojima wrote:
           | Perforce is still widely used in the game industry
        
         | gmueckl wrote:
         | Perforce is convoluted and confusing, but I don't think it's
         | really fair to call it stupid. It is still virtually unmatched
         | in a couple of areas.
        
           | 90s_dev wrote:
           | I wasn't being fair, I was being mean. Perforce is stupid and
           | ugly.
        
           | bananaboy wrote:
           | I would say it's no more convoluted and confusing than git. I
           | used Perforce professionally for quite a few years in
            | gamedev, and found it a bit confusing at first. Then I was
           | self-employed and used git, and coming to git from Perforce I
           | found it very confusing at first. But then I grew to love it.
           | Now I'm back to working for a big gamedev company and we use
           | Perforce and I feel very proficient in both.
        
         | bob1029 wrote:
         | Perforce is really nice if you need to source control 16k
         | textures next to code without thinking too much about it. Git
         | LFS absolutely works but it's more complicated and has less
         | support in industry tooling. Perforce also makes it easier to
         | purge (obliterate) old revisions of files without breaking
         | history for everyone. This can be invaluable if your p4 server
         | starts to run out of disk space.
         | 
         | The ability to lock files centrally might seem outdated by the
         | branching and PR model, but for some organizations the
         | centralized solution works way better because they have built
         | viable business processes around it. Centralized can absolutely
         | smoke distributed in terms of iteration latency if the loop is
         | tight enough and the team is cooperating well.
        
           | dazzawazza wrote:
            | I agree with everything you say except the claim that git-lfs
            | works. For modern game dev (where a full checkout is around
            | 1TB of data) git-lfs is too slow, too error prone and too
            | wasteful of disk space.
           | 
           | Perforce is a complete PITA to work with, too expensive and
           | is outdated/flawed for modern dev BUT for binary files it's
           | really the only game in town (closely followed by svn but
           | people have forgotten how good svn was and only remember how
           | bad it was at tracking branch merging).
        
             | daemin wrote:
             | Sounds like the filesystem filter is required for the files
             | in the repository and not just the metadata in the .git
             | folder.
        
         | barries11 wrote:
         | I used Perforce a lot in the 90s, when it was simple (just p4,
         | p4d, and p4merge!), super fast, and _never_ crashed or
         | corrupted itself. Way simpler, and easier to train newbies on,
         | than any of the alternatives.
         | 
         | Subdirectories-as-branches (like bare repo + workspace-per-
         | branch practices w/git) is so much easier for average computer
          | users to grok, too. Very easy to admin, as well.
         | 
         | No idea what the current "enterprisey" offering is like,
         | though.
         | 
         | For corporate teams, it was a game changer. So much better than
         | any alternative at the time.
         | 
          | We're all so used to git that we've become inured to its
          | terribleness and see every other system as deficient. Training
         | and supporting a bunch of SWE-adjacent users (hw eng, ee,
         | quality, managers, etc) is a really, really good reality check
         | on how horrible the git UX and datamodel is (e.g. obliterating
         | secrets--security, trade, or PII/PHI--that get accidentally
         | checked in is a stop-the-world moment).
         | 
         | For the record, I happily use git, jj, and Gitea all day every
         | day now (and selected them for my current $employer). However,
         | also FTR, I've used SCCS, CVS, SVN, VSS, TFS and MKS SI
         | professionally, each for years at a time.
         | 
          | All of the comments dismissing tools that are significantly
          | better for most use cases other than distributed OSS, but lost
          | the popularity contest, are shortsighted.
         | 
         | Git has a loooong way to go before it's as good in other ways
         | as many of its "competitors". Learning about their benefits is
         | very enlightening.
         | 
         | And, IIRC, p4 now integrates with git, though I've never used
         | it.
        
           | int_19h wrote:
           | I've used CVS, SVN, TFS, Mercurial, and Git in the past, so I
           | have plenty of exposure to different options. I have to deal
           | with Perforce in my current workplace and I have to say that
           | even from this perspective it's honestly pretty bad in terms
           | of how convoluted things are.
        
       | 90s_dev wrote:
       | In about 2010, I briefly had a contract with a security firm with
       | one dev, and there was _no_ source control, and everything
       | written was in low quality PHP. I quit after a week.
        
         | golergka wrote:
         | What kind of security services did they provide? Breaches?
        
           | layer8 wrote:
           | Job security for the dev, probably.
        
         | dshacker wrote:
         | php_final_final_v2.zip shipped to production. A classic. I had
         | a similar experience with https://www.ioncube.com/ php
         | encryption. Everything encrypted and no source control.
        
       | israrkhan wrote:
        | We migrated from Perforce to Git for some fairly large
        | repositories, and I can relate to some of the issues. Luckily we
        | did not have to invent a VFS, although git-lfs was useful for
        | large files.
        
       | carlual wrote:
       | > Authenticity mattered more than production value.
       | 
       | Thanks for sharing this authentic story! As an ex-MSFT in a
       | relatively small product line that only started switching to Git
       | from SourceDepot in 2015, right before I left, I can truly
       | empathize with how incredible a job you guys have done!
        
         | dshacker wrote:
         | Yeah, it was a whole journey. I can't believe it happened.
         | Thanks for your comment.
        
           | carlual wrote:
           | Thank you! Btw, it reminds me of the book "Showstopper" about
           | the journey of releasing Windows NT; highly recommended!
        
             | tux1968 wrote:
             | Thanks for the recommendation! I was just about to reread
             | "Soul Of A New Machine", but will try Showstopper instead,
             | since it sounds to be the same genre.
        
               | zem wrote:
               | tangentially, if you like that genre one of my favourite
               | books in it is "where wizards stay up late", about the
               | development of the internet.
        
           | hacker_homie wrote:
            | I spent a lot of time coaching people out of source depot; it
            | was touch and go there for a while. It was worth it though.
            | Thank you for your effort.
        
       | MBCook wrote:
       | Could someone explain the ideas of forward integration and
       | reverse integration in Source Depot?
       | 
       | I'd never heard of Source Depot before today.
        
         | israrkhan wrote:
         | source depot is (was?) essentially a fork of perforce.
        
           | MBCook wrote:
           | The article mentioned something along those lines, but I've
           | never used it either.
           | 
           | I've only ever really used CVS, SVN, and Git.
        
             | int_19h wrote:
             | Perforce is broadly similar to SVN in semantics, and the
             | same branching logic applies to both. Basically if you have
             | the notion of long-lived main branch and feature branches
             | (and possibly an hierarchy in between, e.g. product- or
             | component-specific branches), you need to flow code between
             | them in an organized way. Forward/reverse integration
             | simply describes the direction in which this is done - FI
             | for main -> feature, RI for feature -> main.
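As a rough git analogue of that flow (branch names are hypothetical, not from the thread): FI merges main into the feature branch to keep it current, and RI merges the finished feature branch back into main.

```shell
# FI/RI sketched as git merges between main and a feature branch.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
rm -rf /tmp/ridemo && mkdir -p /tmp/ridemo && cd /tmp/ridemo
git init -q -b main .
git commit -q --allow-empty -m "main: base"
git checkout -q -b feature-word
git commit -q --allow-empty -m "word: feature work"
git checkout -q main
git commit -q --allow-empty -m "main: other teams' changes"
# Forward integration: main -> feature, absorbing everyone else's work.
git checkout -q feature-word
git merge -q -m "FI from main" main
# Reverse integration: feature -> main, publishing the finished work.
git checkout -q main
git merge -q --no-ff -m "RI from feature-word" feature-word
```

`--no-ff` keeps the RI as a visible merge commit; in a deep branch hierarchy each level would repeat this pair of merges with its parent.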
        
         | dshacker wrote:
         | RI/FI is similar to having long-lived branches in Git. Imagine
         | you have a "develop-word" branch in git. The admins for that
         | branch would merge all of the changes of their code to "main"
         | and from "main" to their long lived branches. It was a little
          | bit different from long-lived git branches, as they also had a
         | file filter (my private branch only had onenote code and it was
         | the "onenote" branch)
        
           | mikepurvis wrote:
           | I've long wanted a hosted Git service that would help me
           | maintain long lived fork branches. I know there's some
           | necessary manual work that is occasionally required to
           | integrate patches, but the existing tooling that I'm familiar
           | with for this kind of thing is overly focused on Debian
           | packaging (quilt, git-buildpackage) and has horrifyingly poor
           | ergonomics.
           | 
           | I'd love a system that would essentially be a source control
           | of my patches, while also allowing a first class view of the
           | upstream source + patches applied, giving me clear controls
           | to see exactly when in the upstream history the breakages
           | were introduced, so that I'm less locking in precise upstream
           | versions that can accept the patches, and more actively
           | engaging with _ranges_ of upstream commits /tags.
           | 
            | I can't imagine how such a thing would actually be
            | commercially useful, but darned if it wouldn't be an obvious
            | fit for AI to automatically examine the upstream and patch
            | history and propose migrations.
        
         | dybber wrote:
         | We had a similar setup, also with a homegrown VCS developed
         | internally in our company, where I sometimes acted as branch
         | admin. I'm not sure it worked exactly like Source Depot, but I
         | can try to explain it.
         | 
         | Basically instead of everyone creating their own short-lived
         | branches (expensive operation), you would have long-lived
         | branches that a larger group of people would commit to (several
          | product areas). The branch admin's job was then to get the work
          | of all these people forward integrated to a branch upward in
          | the hierarchy. This was attempted a few times per day, but if
          | tests failed you would have to reach out to the responsible
          | people to get those tests fixed. Then later, when you get the
         | changes merged upwards, some other changes have also been made
         | to the main integration branch, and now you need to pull these
         | down into your long lived branch - reverse integration - such
         | that your branch is up to date with everyone else in the
         | company.
        
           | neerajsi wrote:
           | At least in the Windows group, we use ri and fi oppositely
           | from how you describe. RI = sharing code with a broader group
           | of people toward trunk. FI = absorbing code created by the
           | larger group of people on the dev team. Eventually we do a
           | set of release forks that are isolated after a final set of
           | FIs, so really outside customers get code via FI and then
           | cherry pick style development.
        
       | BobbyTables2 wrote:
       | I'd like to know when Microsoft internally migrated away from
       | Visual SourceSafe...
       | 
       | They should have recalled it to avoid continued public use...
        
         | dshacker wrote:
         | I didn't even know Microsoft SourceSafe existed.
        
           | masklinn wrote:
           | Lucky you. Definitely one of the worst tools I've had the
           | displeasure of working with. Made worse by people building on
           | top of it for some insane reason.
        
             | moron4hire wrote:
              | It was at least a little better than CVS, but with SVN
              | available at the same time, I never understood why the
              | offices I worked at used Source Safe instead of SVN.
        
               | masklinn wrote:
               | > It was at least a little better than CVS
               | 
               | Highly debatable.
               | 
               | CVS has a horrendous UI, but didn't have a tendency to
               | corrupt itself at the drop of a hat and didn't require
               | locking files to edit them by default (and then require a
               | repository admin to come in and unlock files when a
               | colleague went on holidays with files checked out). Also
               | didn't require shared write access to an SMB share (one
               | of the reasons it corrupted itself so regularly).
        
             | mickeyp wrote:
             | Agreed. It had a funny habit of corrupting its own data
             | store also. That's absolutely what you want in a source
             | control system.
             | 
             | It sucked; but honestly, not using anything is even worse
             | than SourceSafe.
        
               | masklinn wrote:
               | > Agreed. It had a funny habit of corrupting its own data
               | store also. That's absolutely what you want in a source
               | control system.
               | 
               | I still 'member articles calling it a source destruction
               | system. Good times.
               | 
               | > It sucked; but honestly, not using anything is even
               | worse than SourceSafe.
               | 
               | There have always been alternatives. And even when you
               | didn't use anything, at least you knew what to expect.
               | Files didn't magically disappear from old tarballs.
        
             | TowerTall wrote:
             | I remember when we migrated from Visual Source Safe to TFS
             | at my place of work. I was in charge of the migration and
             | we hit errors and opened a ticket with Microsoft Premier
             | Support. The ticket ended up being assigned to one of
              | the creators of Source Safe who replied "What you are seeing is
             | not possible". He did manage to solve it in the end after a
             | lot of head scratching.
        
           | codeulike wrote:
           | We used it. We knew no better. It was different then, you
           | might not hear about alternatives unless you went looking for
           | them. Source Safe was integrated with Visual Studio so was an
           | obvious choice for small teams.
           | 
           | Get this; if you wanted to change a file you had to check it
           | out. It was then locked and no-one else could change it.
           | Files were literally read only on your machine unless you
           | checked them out. The 'one at a time please' approach to
            | Source Control (the other approach being 'let's figure out how
           | to merge this later')
        
             | namdnay wrote:
             | I remember a big commercial SCM at the time that had this
             | as an option, when you wanted to make sure you wouldn't
             | need to merge. Can't remember what it was called, you could
             | "sync to file system" a bit like dropbox and it required
             | teams of full time admins to build releases and cut
              | branches and stuff. Think it was bought by IBM?
        
               | robin_reala wrote:
               | I guess you're talking about Rational Rose? I had the
               | misfortune of using that at my first industry job
               | (fintech in 2004).
        
               | meepmorp wrote:
               | Rose is a UML modeling tool
        
               | robin_reala wrote:
               | Oops, it was ClearCase that was Rational's SCM:
               | https://en.wikipedia.org/wiki/IBM_DevOps_Code_ClearCase
        
               | becurious wrote:
               | ClearCase?
        
             | rswail wrote:
             | Which is exactly how CVS (and its predecessors RCS and
             | SCCS) worked.
             | 
             | They were _file_ based revision control, not repository
             | based.
             | 
             | SVN added folders like trunk/branches/tags that overlaid
             | the file based versioning by basically creating copies of
             | the files under each folder.
             | 
             | Which is why branch creation/merging was such a complicated
             | process, because if any of the files didn't merge, you had
             | a half merged branch source and a half merged branch
             | destination that you had to roll back.
        
               | fanf2 wrote:
               | CVS was called the "concurrent version system" because it
               | did _not_ lock files on checkout. Nor did svn. Perforce
               | does.
        
               | rswail wrote:
               | True dat, my mistake. That was its major feature, from
               | memory though it still used the same reversed diff file
               | format?
        
               | ack_complete wrote:
               | Perforce does not lock files on checkout unless you have
               | the file specifically configured to enforce exclusive
               | locking in the file's metadata or depot typemap.
        
               | umanwizard wrote:
               | I am quite sure that you can edit files in an svn repo to
               | your heart's content regardless of whether anyone else is
               | editing them on their machine at the same time.
        
               | masklinn wrote:
               | Yep, svn has a lock feature but it is opt-in per file
               | (possibly filetype?)
               | 
               | A pretty good tradeoff, because you can set it on complex
               | structured files (e.g. PSDs and the like) to avoid the
               | ballache of getting a conflict in an unmergeable file but
                | it does not block code editing.
               | 
               | And importantly anyone can steal locks by default. So a
               | colleague forgetting to unlock and going on holidays does
               | not require finding a repo admin.
        
             | pjc50 wrote:
             | The lock approach is still used in IC design for some of
              | the Cadence/Synopsys data files which are unmergeable
             | binaries. Not precisely sure of the details but I've heard
             | it from other parts of the org.
        
               | dagw wrote:
               | A lot of engineering is the same. You cannot diff and
               | merge CAD files, so you lock them.
        
               | malkia wrote:
               | Similar in video game shops - lots of binary files, or
               | even huge (non-editable by human) text ones.
        
             | Disposal8433 wrote:
             | The file lock was a fun feature when a developer forgot to
             | unlock it and went on holidays. Don't forget the black hole
             | feature that made files randomly disappear for no reason.
             | It may have been the worst piece of software I have ever
             | used.
        
           | qingcharles wrote:
           | It was pretty janky. We used it in the gamedev world in the
           | 90s once the migration to Visual C started.
        
         | pianoben wrote:
         | I don't know that they _ever_ used it internally, certainly not
          | for anything major. If they had, they probably wouldn't have
         | sold it as it was...
         | 
         | Can't explain TFS though, that was still garbage internally and
         | externally.
        
         | RandallBrown wrote:
         | I doubt most teams ever used it.
         | 
         | I spent a couple years at Microsoft and our team used Source
         | Depot because a lot of people thought that our products were
         | special and even Microsoft's own source control (TFS at the
         | time) wasn't good enough.
         | 
         | I had used TFS at a previous job and didn't like it much, but I
         | really missed it after having to use Source Depot.
        
           | RyJones wrote:
           | USGEO used it in the late 90s, as well as RAID
        
           | jbergens wrote:
           | I was surprised that TFS was not mentioned in the story (at
           | least not as far as I have read).
           | 
           | It should have existed around the same time and other parts
           | of MS were using it. I think it was released around 2005 but
           | MS probably had it internally earlier.
        
             | canucker2016 wrote:
             | SLM (aka slime, shared file-system source code control
             | system) was used in most of MS, aka systems & apps.
             | 
              | NT created (well, not NT itself; IIRC an MS-internal
              | developer tools group was in charge of it) and moved to
              | Source Depot, since a shared file-system doesn't scale well to
             | thousands of users. Especially if some file gets locked and
             | you DoS the whole division.
             | 
             | Source depot became the SCCS of choice (outside of Dev
             | Division).
             | 
             | Then git took over, and MS had to scale git to NT-size
             | scale, and upstream many of the changes to git mainline.
             | 
             | Raymond Chen has a blog that mentions much of this - https:
             | //devblogs.microsoft.com/oldnewthing/20180122-00/?p=97...
        
             | int_19h wrote:
             | TFS was used heavily by DevDiv, but as far as I know they
             | never got perf to the point where Windows folk were
             | satisfied with it on their monorepo.
             | 
             | It wasn't too bad for a centralized source control system
             | tbh. Felt a lot like SVN reimagined through the prism of
             | Microsoft's infamous NIH syndrome. I'm honestly not sure
             | why anyone would use it over SVN unless you wanted their
             | deep integration with Visual Studio.
        
         | mattgrice wrote:
         | Around 2000? The only project I ever knew that used it was .NET
         | and that was on SD by around then.
        
       | RyJones wrote:
       | I was on the team that migrated Microsoft from XNS to TCP/IP - it
       | was way less involved, but similar lessons learned.
       | 
       | Migrating from MSMAIL -> Exchange, though - that was rough
        
         | aaronbrethorst wrote:
         | Is that what inspired the "Exchange: The Most Feared and
         | Loathed Team in Microsoft" license plate frames? I'm probably
         | getting a bit of the wording wrong. It's been nearly 20 years
         | since I saw one.
        
           | RyJones wrote:
           | Probably. A lot of people really loved MSMAIL; not so much
           | Exchange.
           | 
           | I have more long, boring stories about projects there, but
           | that's for another day
        
             | canucker2016 wrote:
             | And sometimes they loved MSMAIL for the weirdest reasons...
             | 
             | MSMAIL was designed for Win3.x. Apps didn't have multiple
             | threads. The MSMAIL client app that everyone used would
             | create the email to be sent and store the email file on the
             | system.
             | 
             | An invisible app, the Mail Pump, would check for email to
             | be sent and received during idle time (N.B. Other apps
             | could create/send emails via APIs, so you couldn't have the
             | email processing logic in only the MSMAIL client app).
             | 
             | So the user could hit the Send button and the email would
             | be moved to the Outbox to be sent. The mail pump wouldn't
             | get a chance to process the outgoing email for a few
             | seconds, so during that small window, if the user decided
             | that they had been too quick to reply, they could retract
              | that outgoing email. Career-limiting move averted.
             | 
             | Exchange used a client-server architecture for email. Email
             | client would save the email in the outbox and the server
             | would notice the email almost instantly and send it on its
             | way before the user blinked in most cases.
             | 
             | A few users complained that Exchange, in essence, was too
             | fast. They couldn't retract a misguided email reply, even
             | if they had reflexes as quick as the Flash.
        
               | RyJones wrote:
               | I re-wrote MSPAGER for Exchange. Hoo boy what a hack that
               | was! A VB3 app running as a service, essentially. I don't
               | know if you remember romeo and juliet; those were PCs
               | pulled from pc-recycle by a co-worker to serve install
               | images.
        
               | mschuster91 wrote:
               | > A few users complained that Exchange, in essence, was
               | too fast.
               | 
               | That is something that's actually pretty common and
               | called "benevolent deception" - it has been discussed on
               | HN some years past, too [1].
               | 
               | [1] https://news.ycombinator.com/item?id=16289380
        
       | palmotea wrote:
       | What's the connection (if any) between "Source Depot" and TFSVC?
        
         | tamlin wrote:
         | Source Depot was based on Perforce. Microsoft bought a license
         | for the Perforce source code and made changes to work at
         | Microsoft scale (Windows, Office).
         | 
         | TFS was developed in the Studio team. It was designed to work
         | on Microsoft scale and some teams moved over to it (SQL
         | server). It was also available as a fairly decent product
         | (leagues better than SourceSafe).
        
         | nfg wrote:
         | None that I know of, Source Depot is derived from Perforce.
        
       | hulitu wrote:
       | > Microsoft Office migration from Source Depot to Git
       | 
       | Will they get an annoying window, in the middle of the migration,
       | telling them that Office must be updated now, or the world will
       | end ?
        
       | ksynwa wrote:
       | Not doubting it but I don't understand how a shallow clone of
       | OneNote would be 200GB.
        
         | paulddraper wrote:
         | Must have videos or binaries.
        
           | LtWorf wrote:
           | They probably vendor every single .dll it uses.
        
             | skrebbel wrote:
             | that's a lot of .dll files!
        
         | dshacker wrote:
         | Shallow clone of all of office, not onenote.
        
           | ksynwa wrote:
           | Oh alright. Thanks.
        
       | carlhjerpe wrote:
       | This article makes it sound like thousands of engineers who are
       | good enough to get hired at Microsoft and work on Office had
       | never used git? That sounds a bit overplayed tbh; if you haven't
       | used git you must live under a rock. You can't use Source Depot
       | at home.
       | 
       | Overall good story though
        
         | dshacker wrote:
          | You'd be surprised how many people at Microsoft have spent
          | their entire careers there (starting before git existed) and
          | have never used Git. Git is relatively new (2005) but source
         | control systems are not.
        
           | shakna wrote:
           | That's still two decades. Git is so popular Microsoft bought
           | one of the major forges 7 years ago.
           | 
           | To have never touched it in the last decade? You've got a gap
           | in your CV.
        
             | stockerta wrote:
              | Not everyone wants to code as a hobby, so if their work
              | doesn't use git then they won't use it either.
        
               | dkdbejwi383 wrote:
               | Not everyone _can_ code as a hobby. Some of us are old
               | and have families and other commitments
        
               | shakna wrote:
               | That's when you can only hope that your workplace is one
               | that trains - so the investment isn't one sided.
        
               | qingcharles wrote:
               | Agreed. In my professional career, the vast majority of
               | devs I've worked with never wrote a single line of code
               | outside of the office.
        
             | bdcravens wrote:
             | The same could be said of .NET, Wordpress, or Docker.
        
               | shakna wrote:
                | Yes? If it's in your field, like a webdev who has never
               | touched Wordpress, it can be surprising. An automated
               | tester who has never tried containers also has a problem.
               | 
               | These are young industries. So most hiring teams expect
               | that you take the time to learn new technologies as they
               | become established.
        
             | AdamN wrote:
             | This is one of the problems at big tech - people 10-20
             | years in and haven't lived in the outside world. It's a
             | hard problem to solve.
        
             | Freak_NL wrote:
             | I believe it. If you are a die-hard Microsoft person, your
             | view of computing would be radically different from even
             | the average developer today, let alone devs who are used to
             | using FOSS.
             | 
             | Turn it around: If I were to apply for a job at Microsoft,
             | they would probably find that my not using Windows for over
             | twenty years is a gap on my CV (not one I would care to
             | fill, mind).
        
               | int_19h wrote:
               | It would very much depend on the team. There's no
               | shortage of those that ship products for macOS and Linux,
               | and sometimes that can even be the dominant platform.
        
         | YPPH wrote:
         | It's entirely plausible that a long-term engineer at Microsoft
          | wouldn't have used git. I'm sure a considerable number of
         | software engineers don't program as a hobby.
        
         | lIl-IIIl wrote:
         | Sure you can use Source Depot (actually Perforce) at home:
         | https://www.perforce.com/p/vcs/vc/free-version-control
        
           | YPPH wrote:
           | I think Source Depot is a proprietary fork with a lot of
           | Microsoft-stuff added in.
        
         | compiler-guy wrote:
         | It only takes a week to learn enough git to get by, and only a
         | month or two to become every-day use proficient. Especially if
         | one is already familiar with perforce, or svn, or other VCS.
         | 
         | Yes, there is a transition, no it isn't really that hard.
         | 
         | Anyone who views lack of git experience as a gap in a CV is
         | selecting for the wrong thing.
        
       | AdamN wrote:
       | I feel like we're well into the longtail now. Are there other SCM
       | systems or is it the end of history for source control and git is
       | the one and done solution?
        
         | masklinn wrote:
         | Mercurial still has some life to it (excluding Meta's fork of
         | it), jj is slowly gaining, fossil exists.
         | 
         | And afaik P4 still does good business, because DVCS in general
         | and git in particular remain pretty poor at dealing with large
         | binary assets so it's really not great for e.g. large gamedev.
         | Unity actually purchased PlasticSCM a few years back, and has
         | it as part of their cloud offering.
         | 
         | Google uses its own VCS called Piper which they developed when
         | they outgrew P4.
        
           | zem wrote:
           | google also has a mercurial interface to piper
        
         | linkpuff wrote:
          | There are some other solutions (like jujutsu, which, while
          | using git as its storage medium, handles commits somewhat
          | differently). But I do believe we've reached the point where
          | git is the one-stop shop for all source control needs,
          | despite its flaws/complexity.
        
         | dgellow wrote:
         | Perforce is used in game dev, animation, etc. git is pretty
         | poor at dealing with lots of really large assets
        
           | qiine wrote:
           | why is this still the case ?
        
             | rwmj wrote:
             | I've been checking in large (10s to 100s MBs) tarballs into
             | one git repo that I use for managing a website archive for
             | a few years, and it can be made to work but it's very
             | painful.
             | 
             | I think there are three main issues:
             | 
             | 1. Since it's a distributed VCS, everyone must have a whole
             | copy of the entire repo. But that means anyone cloning the
             | repo or pulling significant commits is going to end up
             | downloading vast amounts of binaries. If you can directly
             | copy the .git dir to the other machine first instead of
              | using git's normal cloning mechanism then it's not _as_
              | bad, but you're still fundamentally copying everything:
              | 
              |     $ du -sh .git
              |     55G    .git
             | 
             | 2. git doesn't "know" that something is a binary (although
             | it seems to in some circumstances), so some common
             | operations try to search them or operate on them in other
             | ways as if they were text. (I just ran git log -S on that
             | repo and git ran out of memory and crashed, on a machine
             | with 64GB of RAM).
             | 
             | 3. The cure for this (git lfs) is worse than the disease.
             | LFS is so bad/strange that I _stopped_ using it and went
             | back to putting the tarballs in git.
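(A hedged aside on issue 1: recent git can avoid downloading every historical blob up front via a "blobless" partial clone, fetching file contents lazily as they are checked out. A minimal sketch, with an invented repo name and a stand-in binary; it assumes a git new enough to support `--filter`, and note the server-side `uploadpack.allowfilter` opt-in:)

```shell
#!/bin/sh
# Sketch: a "blobless" partial clone transfers commit and tree metadata up
# front and fetches file contents lazily when they are checked out.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway "server" repo containing one stand-in binary asset.
git init -q big-repo
dd if=/dev/zero of=big-repo/asset.bin bs=1024 count=64 2>/dev/null
git -C big-repo add asset.bin
git -C big-repo -c user.name=t -c user.email=t@example.com commit -qm 'add asset'

# Partial clone requires a server-side opt-in; for file:// it is this config.
git -C big-repo config uploadpack.allowfilter true

# No blobs are transferred beyond what the initial checkout itself needs,
# so old versions of large files never hit the local object store.
git clone -q --filter=blob:none "file://$tmp/big-repo" lazy
```

(This only helps the clone/fetch side; it does nothing for the in-repo search and memory problems in issue 2.)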
        
               | dh2022 wrote:
               | Why would someone check binaries in a repo? The only time
               | I came across checked binaries in a repo was because that
               | particular dev could not be bothered to learn nuget /
               | MAVEN. (the dev that approved that PR did not understand
               | that either)
        
               | masklinn wrote:
               | Because it's way easier if you don't require every level
               | designer to spend 5 hours recompiling everything before
               | they can get to work in the morning, because it's way
               | easier to just checkin that weird DLL than provide weird
               | instructions to retrieve it, because onboarding is much
               | simpler if all the tools are in the project, ...
               | 
               | And it's no sweat off p4's back.
        
               | dh2022 wrote:
               | Hmm, I do not get it.... "The binaries are checked in the
               | repo so that that the designer would not spend 5 hours
               | recompiling" vs "the binaries come from a nuget site so
                | that the designer would not spend 5 hours recompiling".
               | 
               | In both cases the designer does not recompile, but in the
               | second case there are no checked in binaries in the
               | repo... I still think nuget / MAVEN would be more
               | appropriate for this task...
        
               | masklinn wrote:
               | Everything is in P4: you checkout the project to work on
               | it, you have everything. You update, you have everything
               | up to date. All the tools are there, so any part of the
               | pipeline can rely on anything that's checked in. You need
               | an older version, you just check that out and off you go.
               | And you have a single repository to maintain.
               | 
               | VCS + Nuget: half the things are in the VCS, you checkout
               | the project and then you have to hunt down a bunch of
               | packages from a separate thing (or five), when you update
               | the repo you have to update the things, hopefully you
               | don't forget any of the ones you use, scripts run on a
               | prayer that you have fetched the right things or they
               | crash, version sync is a crapshoot, hope you're not
               | working on multiple projects at the same time needing
               | different versions of a utility either. Now you need 15
               | layers of syncing and version management on top of each
               | project to replicate half of what just checking
               | everything into P4 gives you for free.
        
               | dh2022 wrote:
               | I have no idea what environment / team you worked on but
               | nuget is pretty much rock solid. There are no scripts
               | running on a prayer that everything is fetched. Version
                | sync is not a crapshoot because nuget versions are updated
               | during merges and with proper merge procedures (PR build
               | + tests) nuget versions are always correct on the main
               | branch.
               | 
               | One does not forget what nugets are used: VS projects do
               | that bookkeeping for you. You update the VS project with
               | the new nugets your task requires; and this bookkeeping
               | will carry on when you merge your PR.
               | 
               | I have seen this model work with no issues in large
               | codebases: VS solutions with upwards of 500,000 lines of
               | code and 20-30 engineers.
        
               | nyarlathotep_ wrote:
               | > VCS + Nuget: half the things are in the VCS, you
               | checkout the project and then you have to hunt down a
               | bunch of packages from a separate thing
               | 
               | Oh, and there's things like x509/proxy/whatever errors
               | when on a corpo machine that has ZScaler or some such, so
               | you have to use internal Artifactory/thing but that
               | doesn't have the version you need or you need permissions
               | to access so.. and etc etc.
        
               | rwmj wrote:
               | Because it's (part of) a website that hosts the tarballs,
               | and we want to keep the whole site under version control.
               | Not saying it's a _good_ reason, but it is a reason.
        
               | suriya-ganesh wrote:
               | This is a problem that occurs across game development to
               | ML datasets.
               | 
               | We built oxen to solve this problem
               | https://github.com/Oxen-AI/Oxen (I work at Oxen.ai)
               | 
               | Source control for large data. Currently our biggest
               | repository is 17 TB. would love for you to try it out.
               | It's open source, so you can self host as well.
        
           | nyarlathotep_ wrote:
           | I've heard this about game dev before. My (probably only
           | somewhat correct) understanding is it's more than just source
           | code--are they checking in assets/textures etc? Is perforce
           | more appropriate for this than, say, git lfs?
        
             | malkia wrote:
             | And often binaries: .exe, .dll, even .pdb files.
        
               | nyarlathotep_ wrote:
               | Interesting. Seems antithetical to the 'git centered'
               | view of being for source code only (mostly)
               | 
               | I think I read somewhere that game dev teams would also
               | check in the actual compiler binary and things of that
               | nature into version control.
               | 
               | Usually it's considered "bad practice" when you see,
                | like, an entire sysroot of shared libs in a git
               | repository.
               | 
               | I don't even have any feeling one way or another. Even
               | today "vendoring" cpp libraries (typically as source)
               | isn't exactly rare. I'm not even sure if this is always a
               | "bad" thing in other languages. Everyone just seems to
               | have decided that relying on a/the package manager and
               | some sort of external store is the Right Way. In some
               | sense, it's harder to make the case for that.
        
             | int_19h wrote:
             | I'm not sure about the current state of affairs, but I've
             | been told that git-lfs performance was still not on par
             | with Perforce on those kinds of repos a few years ago.
             | Microsoft was investing a lot of effort in making it work
             | for their large repos though so maybe it's different now.
             | 
             | But yeah, it's basically all about having binaries in
             | source control. It's not just game dev, either - hardware
             | folk also like this for their artifacts.
        
             | masklinn wrote:
             | Assets, textures, design documents, tools, binary
             | dependencies, etc...
             | 
              | And yes, p4 just rolls with it, git lfs is a creaky hack.
        
         | foooorsyth wrote:
         | git by itself is often unsuitable for XL codebases. Facebook,
         | Google, and many other companies / projects had to augment git
         | to make it suitable or go with a custom solution.
         | 
         | AOSP with 50M LoC uses a manifest-based, depth=1 tool called
         | repo to glue together a repository of repositories. If you're
         | thinking "why not just use git submodules?", it's because git
         | submodules has a rough UX and would require so much wrangling
         | that a custom tool is more favorable.
         | 
         | Meta uses a custom VCS. They recently released sapling:
         | https://sapling-scm.com/docs/introduction/
         | 
         | In general, the philosophy of distributed VCS being better than
         | centralized is actually quite questionable. I want to know what
         | my coworkers are up to and what they're working on to avoid
         | merge conflicts. DVCS without constant out-of-VCS
         | synchronization causes more merge hell. Git's default packfile
         | settings are nightmarish -- most checkouts should be depth==1,
         | and they should be dynamic only when that file is accessed
         | locally. Deeper integrations of VCS with build systems and file
         | systems can make things even better. I think there's still tons
         | of room for innovation in the VCS space. The domain naturally
         | opposes change because people don't want to break their core
         | workflows.
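(The depth=1 behavior that `repo` leans on is plain git shallow cloning; a small self-contained sketch, with an invented three-commit repo:)

```shell
#!/bin/sh
# Sketch: a shallow (depth=1) clone carries only the tip commit of the
# branch it fetches, which is what AOSP's `repo` tool asks git for.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q full
for i in 1 2 3; do
  echo "$i" > full/file.txt
  git -C full add file.txt
  git -C full -c user.name=t -c user.email=t@example.com commit -qm "commit $i"
done

# file:// (like git:// and https://) supports shallow negotiation.
git clone -q --depth 1 "file://$tmp/full" shallow
git -C shallow rev-list --count HEAD   # 1, versus 3 in the full repo
```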
        
           | msgodel wrote:
           | git submodules have a bad ux but it's certainly not worse
           | than Android's custom tooling. I understand why they did it
           | but in retrospect that seems like an obvious mistake to me.
        
           | WorldMaker wrote:
           | It's interesting to point out that almost all of Microsoft's
           | "augmentations" to git have been open source and _many_ of
           | them have made it into git upstream already and come  "ready
           | to configure" in git today ("conical" sparse checkouts, a lot
           | of steady improvements to sparse checkouts, git commit-graph,
           | subtle and not-so-subtle packfile improvements, reflog
           | improvements, more). A lot of it is opt-in stuff because of
           | backwards compatibility or extra overhead that small/medium-
           | sized repos won't need, but so much of it is there to be used
           | by anyone, not just the big corporations.
           | 
           | I think it is neat that at least one company with mega-repos
           | is trying to lift all boats, not just their own.
        
             | kccqzy wrote:
             | Meta and Google both have been using mercurial and they
             | have also been contributing back to upstream mercurial.
        
       | 2d8a875f-39a2-4 wrote:
       | Always nice to read a new retelling of this old story.
       | 
       | TFA throws some shade at how "a single get of the office repo
       | took some hours" then elides the fact that such an operation was
       | practically impossible to do on git at all without creating a new
       | file system (VFS). Perforce let users check out just the parts of
       | a repo that they needed, so I assume most SD users did that
       | instead of getting every app in the Office suite every time. VFS
       | basically closes that gap on git ("VFS for Git only downloads
       | objects as they are needed").
       | 
       | Perforce/SD were great for the time and for the centralised VCS
       | use case, but the world has moved on I guess.
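(The "check out just the parts you need" workflow has a stock-git analogue now in sparse checkout; a sketch with an invented two-app "suite" layout, assuming git with the `sparse-checkout` builtin:)

```shell
#!/bin/sh
# Sketch: git sparse-checkout materializes only chosen directories in the
# working tree, roughly a Perforce/SD client spec mapping a subset of the depot.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q suite
mkdir -p suite/word suite/excel
echo w > suite/word/app.c
echo x > suite/excel/app.c
git -C suite add .
git -C suite -c user.name=t -c user.email=t@example.com commit -qm 'two apps'

git clone -q "file://$tmp/suite" client
git -C client sparse-checkout set word   # only word/ stays materialized
```

(The full clone still downloads every object; combining this with a partial clone filter is what keeps the transfer small too, which is the gap VFS for Git closed transparently.)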
        
         | daemin wrote:
         | Some companies have developed their own technology like VFS for
         | use with Perforce, so you can check out the entire suite of
         | applications but only pull the files when you try to access
         | them in a specific way. This is a lot more important in game
          | development where massive binary source assets are stored
          | alongside text files.
         | 
          | It uses the same filesystem technology built into Windows that
          | remote-drive programs (probably) use.
         | 
         | Personally I kind of still want some sort of server based VCS
          | which can store your entire company's source without
         | needing to keep the entire history locally when you check out
         | something. But unfortunately git is still good enough to use on
         | an ad-hoc basis between machines for me that I don't feel the
         | need to set up a central server and CI/CD pipeline yet.
         | 
         | Also being able to stash, stage hunks, and interactively rebase
         | commits are features that I like and work well with the way I
         | work.
        
           | sixothree wrote:
           | Doesn't SVN let you check out and commit any folder or file
            | at any depth of a project you choose? Maybe not the checkouts
            | and commits, but the log history for a single subtree is
            | something I miss from the SVN tooling.
        
             | gilbertbw wrote:
             | Can you not achieve the log history on a subtree with `git
             | log my/subfolder/`? Tools like TortoiseGit let you right
             | click on a folder and view the log of changes to it.
        
               | daemin wrote:
               | Yes it can, but the point is that in a git repo you store
               | the entire history locally, so whenever you clone a repo,
               | you clone its history on at least one branch.
               | 
               | So when you have a repo that's hundreds of GB in size,
               | the entire history can be massive.
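The full-history cost described here is what shallow clones were added for. A minimal sketch, using a throwaway local repo (all paths are made up for illustration):

```shell
# Sketch: a shallow clone (--depth 1) fetches only the newest commit,
# not the whole history. All paths here are throwaway examples.
tmp=$(mktemp -d)
git init -q "$tmp/server"
git -C "$tmp/server" -c user.email=d@e -c user.name=d \
    commit -q --allow-empty -m "commit one"
git -C "$tmp/server" -c user.email=d@e -c user.name=d \
    commit -q --allow-empty -m "commit two"
# file:// (unlike a plain local path) honors --depth:
git clone -q --depth 1 "file://$tmp/server" "$tmp/client"
git -C "$tmp/client" rev-list --count HEAD   # prints 1, not 2
```

This trades history for size: `git log` in the client only sees the commits that were fetched.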
        
             | int_19h wrote:
             | You can indeed. The problem with this strategy is that now
             | you need to maintain the list of directories that needs to
             | be checked out to build each project. And unless this is
             | automated somehow, the documentation will gradually diverge
             | from reality.
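git's sparse checkout has exactly this maintenance problem: each project's directory list has to live somewhere. A rough sketch against a throwaway repo (directory names are made up for illustration):

```shell
# Sketch: git sparse-checkout limits the working tree to a listed set
# of directories -- a list that someone must keep in sync with the
# build. Directory names here are made up for illustration.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
mkdir -p word/src excel/src shared/libs
for f in word/src/a.c excel/src/b.c shared/libs/c.c; do
    echo "int x;" > "$f"
done
git add .
git -c user.email=d@e -c user.name=d commit -qm "layout"
# Check out only what the hypothetical "word" project needs:
git sparse-checkout set word shared/libs
ls    # word/ and shared/ remain; excel/ is gone from the working tree
```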
        
         | noitpmeder wrote:
         | My firm still uses perforce and I can't say anyone likes it at
          | this point. You can almost see the light leave the eyes of new
         | hires when you tell them we don't use git like the rest of the
         | world.
        
           | kccqzy wrote:
           | I cannot believe that new hires would be upset by the choice
            | of version control software. They joined a new company after
            | jumping through so many hoops, and it's on them to keep an
            | open mind towards the new company's processes and tools.
        
             | axus wrote:
             | If companies don't cater to the whims of the youth, they'd
             | have to hire... _old people_
        
               | xeromal wrote:
               | e-gad!
        
               | Tostino wrote:
               | But those cost so much more!
        
               | bongodongobob wrote:
               | But they are _Analysts_ and know corporate speak and are
               | really good at filling their schedules with meetings!
               | They must be so busy doing very meaningful work!
        
             | DanielHB wrote:
              | I almost cried with happiness when we moved to git from
              | SVN at my first job, after being there for 6 months.
              | 
              | They might not be upset in the first few weeks, but after
              | a month or so they will be familiar with the pain.
        
               | kccqzy wrote:
               | Oh a month is definitely enough time.
        
             | mattl wrote:
             | I worked with someone who was surprised the company didn't
             | use Bitbucket and Discord. They were unhappy about both.
        
               | evilduck wrote:
               | Discord I get, at least from a community or network
               | effect, but Bitbucket? I can't figure out why anyone but
               | a CTO looking to save a buck would prefer Bitbucket.
        
               | mattl wrote:
               | I cannot imagine many jobs use Discord over Slack/Teams
               | unless they're gaming related. This was not a gaming
               | related job.
        
               | connicpu wrote:
               | We use BitBucket where I work. Due to certain export
               | regulations it's simpler for us to keep as many services
               | as possible on-prem if they're going to contain any of
               | our intellectual property, so BitBucket Server it is.
               | There are other options of course, but all of the cloud
               | solutions were off the table.
        
               | tough wrote:
                | sorry for the tangent, but how do you deal with AI?
        
               | Kwpolska wrote:
               | Why would you expect them to? It's really easy to live
               | without AI.
        
               | tough wrote:
                | nothing prevents them from running a gpu locally or on
                | their own infra.
                | 
                | I was asking because I wonder what the enterprises that
                | want to both use AI like LLMs in their workflows and
                | keep their data and pipelines 100% air-gapped and owned
                | are doing rn.
                | 
                | Feels to me like one of the few areas where you can
                | compete with the big labs; might be wrong
        
               | bigstrat2003 wrote:
               | Presumably they don't use it.
        
               | const_cast wrote:
               | I actually quite like the interface of bitbucket. I think
               | it's better, in a lot of ways, compared to gitlab and
               | github.
               | 
               | What I hate about bitbucket is how stagnated it is.
        
             | Marsymars wrote:
             | I feel like I've got an open mind towards processes and
             | tools; the problem with a company using anything other than
             | Git at this point is that unless they have a good
             | explanation for it, it's not going to be an indicator that
             | the company compared the relative merits of VCS systems and
             | chose something other than Git - it's going to be an
             | indicator that the company doesn't have the bandwidth or
             | political will to modernize legacy processes.
        
               | tough wrote:
               | maybe they're on bazel?
        
               | kccqzy wrote:
               | Well bazel is not a tool for version control.
        
               | tough wrote:
               | damnit i was thinking jujutsu and got owned lol
               | https://github.com/jj-vcs/jj
        
               | kccqzy wrote:
               | Yeah but as a new hire, one doesn't yet know whether
               | there is a good explanation for using a non-git tool. It
               | takes time to figure that out.
               | 
               | A legacy tool might be bad, or it might be very good but
               | just unpopular. A company that devotes political will to
               | modernize for the sake of modernizing is the kind of
               | craziness we get in the JS ecosystem.
        
             | jayd16 wrote:
             | A craftsman appreciates good tools.
        
               | kccqzy wrote:
               | Is git a good tool then? Not necessarily. Some still
               | think hg is better. Others think newer tools like jj are
               | even better while being git compatible.
        
             | int_19h wrote:
             | Perforce is sufficiently idiosyncratic that it's kinda
             | annoying even when you remember the likes of SVN. Coming to
             | it from Git is a whole world of pain.
        
             | inglor wrote:
             | The problem is that you come to a prestigious place like
             | Microsoft and end up using horrible outdated software.
             | 
              | Credit where credit is due: during my time at Excel we did
             | improve things a lot (migration from Script# to TypeScript,
             | migration from SourceDepot to git, shorter dev loop and
             | better tooling etc) and a large chunk of development time
             | was spent on developer tooling/happiness.
             | 
             | But it does suck to have to go to one of the old places and
             | use sourcedepot and `osubmit` the "make a change" tool and
              | then go through 16 popups in the "happy path" to submit your
             | patch for review (also done in a weird windows gui review
             | tool)
             | 
             | Git was quite the improvement :D
        
             | filoleg wrote:
             | > I cannot believe that new hires would be upset by the
             | choice of version control software.
             | 
             | I can, if the version control software is just not up to
             | standards.
             | 
             | I absolutely didn't mind using mercurial/hg, even though I
              | literally hadn't touched it until that point and knew
             | nothing about it, because it is actually pretty good. I
             | like it more than git now.
             | 
             | Git is a decent option that most people would be familiar
              | with, so I cannot be upset about it either.
             | 
              | On the other hand, Source Depot sucked badly, it felt like I
             | had to fight against it the entire time. I wasn't upset
             | because it was unfamiliar to me. In fact, the more familiar
             | I got with it, the more I disliked it.
        
           | 2d8a875f-39a2-4 wrote:
           | Yeah it's an issue for new devs for sure. TFA even makes the
           | point, "A lot of people felt refreshed by having better
           | transferable skills to the industry. Our onboarding times
           | were slashed by half".
        
             | tom_ wrote:
             | Interesting to hear it was so much of a problem in terms of
             | onboarding time. Maybe Source Depot was particularly weird,
             | and/or MS were using it in a way that made things
             | particularly complicated? Perforce has never felt
             | especially difficult to use to me, and programmers never
             | seem to have any difficulty with it. Artists and designers
             | seem to pick it up quite quickly too. (By and large, in
             | contrast to programmers, they are less in the habit of
             | putting up with the git style of shit.)
        
               | chokolad wrote:
               | > Interesting to hear it was so much of a problem in
               | terms of onboarding time. Maybe Source Depot was
               | particularly weird, and/or MS were using it in a way that
               | made things particularly complicated?
               | 
                | It was not. It was literally a fork of Perforce with the
                | executable renamed from p4 to sd.exe. The command line
                | was pretty much identical.
        
           | Degorath wrote:
           | Can't say anything about perforce as I've never used it, but
           | I'd give my left nut to get Google's Piper instead of git at
           | work :)
        
         | swsieber wrote:
         | I'm a bit surprised git doesn't offer a way to checkout only
         | specific parts of the git tree to be honest. It seems like it'd
         | be pretty easy to graft on with an intermediate service that
         | understands object files, etc.
        
           | jjmarr wrote:
           | It's existed for a while. Partial clones and LFS.
           | 
           | https://git-scm.com/docs/partial-clone
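A minimal sketch of the blobless flavor of partial clone; the local "origin" repo here just stands in for a real server, and the paths are made up for illustration:

```shell
# Sketch: a blobless partial clone (--filter=blob:none) downloads
# commits and trees up front and fetches file contents on demand.
# The local "origin" repo stands in for a real server.
tmp=$(mktemp -d)
git init -q "$tmp/origin"
cd "$tmp/origin"
echo "hello" > file.txt
git add file.txt
git -c user.email=d@e -c user.name=d commit -qm "initial"
git config uploadpack.allowfilter true   # let clients request filters
cd "$tmp"
git clone -q --filter=blob:none "file://$tmp/origin" partial
# Checking out HEAD fetched file.txt's blob lazily:
cat partial/file.txt   # prints: hello
```

Combined with sparse checkout, this is roughly how "VFS for Git only downloads objects as they are needed" looks in stock git today.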
        
             | swsieber wrote:
             | Thanks!
        
         | socalgal2 wrote:
         | VFS does not replace Perforce. Most AAA game companies still
          | use Perforce. In particular, they need locks on assets so two
          | people don't edit them at the same time and end up with an
          | unmergeable change and wasted time, as one artist has to throw
          | their work away.
        
       | 0points wrote:
        | Having used VSS in the 90s myself, I was surprised it wasn't
        | even mentioned.
       | 
        | VSS (Visual SourceSafe) being Microsoft's own version control
        | system, unlike Source Depot, which was licensed from Perforce.
        
         | tamlin wrote:
         | Yes, I used VSS as a solo developer in the 90s. It was a
         | revelation at the time. I met other VCS systems at grad school
         | (RCS, CVS).
         | 
         | I started a job at MSFT in 2004 and I recall someone explaining
         | that VSS was unsafe and prone to corruption. No idea if that
         | was true, or just lore, but it wasn't an option for work
         | anyway.
        
           | mmastrac wrote:
           | We used to call it Visual Source Unsafe because it was
           | corrupting repos all the time.
        
             | skipkey wrote:
             | As I recall, one problem was you got silent corruption if
             | you ran out of disk space during certain operations, and
             | there were things that took significantly more disk space
             | while in flight than when finished, so you wouldn't even
             | know.
             | 
             | When I was at Microsoft, Source Depot was the nicer of the
             | two version control systems I had to use. The other, Source
             | Library Manager, was much worse.
        
             | meepmorp wrote:
             | iirc, we called it visual source shred
             | 
             | kinda nice to know it wasn't just our experience
        
           | sumtechguy wrote:
           | The integration with sourcesafe and all of the tools was
           | pretty cool back then. Nothing else really had that level of
           | integration at the time. However, VSS was _seriously_ flakey.
           | It would corrupt randomly for no real reason. Daily backups
           | were always being restored in my workplace. Then they picked
            | PVCS. At least it didn't corrupt itself.
           | 
           | I think VSS was fine if you used it on a local machine. If
           | you put it on a network drive things would just flake out. It
           | also got progressively worse as newer versions came out. Nice
            | GUI, very straightforward to teach someone how to use it
           | (checkout file, change, check in like a book), random
           | corruptions about sums up VSS. That checkin/out model seems
           | simpler for people to grasp. The virtual/branch systems most
           | of the other ones use is kind of a mental block for many
           | until they grok it.
        
           | marcosdumay wrote:
           | > No idea if that was true
           | 
           | It's an absurd understatement. The only people that seriously
           | used VSS and didn't see any corruption were the people that
           | didn't look at their code history.
        
           | smithkl42 wrote:
           | I used VSS for a few years back in the late 90's and early
           | 2000's. It was better than nothing - barely - but it was very
           | slow, very network intensive (think MS Access rather than
           | SQL), it had very poor merge primitives (when you checked out
           | a file, nobody else could change it), and yes, it was
           | exceedingly prone to corruption. A couple times we just had
           | to throw away history and start over.
        
             | electroly wrote:
             | SourceSafe had a great visual merge tool. You could enable
             | multiple checkouts. VSS had tons of real issues but not
             | enabling multiple checkouts was a pain that companies
             | inflicted on themselves. I still miss SourceSafe's merge
             | tool sometimes.
        
           | wvenable wrote:
           | I was mandated to use VSS in a university course in the late
           | 90s -- one course, one project -- and we still managed to
           | corrupt it.
        
         | larrywright wrote:
         | I used VSS in the 90s as well, it was a nightmare when working
         | in a team. As I recall, Microsoft themselves did not use VSS
         | internally, at least not for the majority of things.
        
           | hpratt4 wrote:
           | That's correct. Before SD, Microsoft orgs (at least Office
           | and Windows; I assume others too) used an internal tool
           | called SLM ("slime"); Raymond Chen has blogged about it, in
           | passing: https://devblogs.microsoft.com/oldnewthing/20180122-
           | 00/?p=97...
        
         | chiph wrote:
         | VSS was picked up via the acquisition of One Tree Software in
         | Raleigh. Their product was SourceSafe, and the "Visual" part
         | was added when it was bundled with their other developer tools
         | (Visual C, Visual Basic, etc). Prior to that Microsoft sold a
         | version control product called "Microsoft Delta" which was
         | expensive and awful and wasn't supported on NT.
         | 
         | One of the people who joined Microsoft via the acquisition was
         | Brian Harry, who led the development of Team Foundation Version
         | Control (part of Team Foundation Server - TFS) which used SQL
         | Server for its storage. A huge improvement in manageability and
         | reliability over VSS. I think Brian is retired now - his blog
         | at Microsoft is no longer being updated.
         | 
         | From my time using VSS, I seem to recall a big source of
          | corruption was its use of network file locking over SMB. If
         | there were a network glitch (common in the day) you'd have to
         | repair your repository. We set up an overnight batch job to run
         | the repair so we could be productive in the mornings.
        
           | EvanAnderson wrote:
           | > ...I seem to recall a big source of corruption was it's use
           | of network file locking over SMB...
           | 
           | Shared database files (of any kind) over SMB... _shudder_
           | Those were such bad days.
        
       | ThinkBeat wrote:
       | What were the biggest hurdles? Where did Git fall short? How did
        | you structure the repo(s)? Were there many artifacts that went
        | into integration with Git LFS?
        
       | airstrike wrote:
       | _> Today, as I type these words, I work at Snowflake. Snowflake
       | has around ~2,000 engineers. When I was in Office, Office alone
       | was around ~4,000 engineers._
       | 
       | I'm sorry, what?! 4,000 engineers doing what, exactly?
       | 
       | Excel turns 40 this year and has changed very little in those
       | four decades. I can't imagine you need 4,000 engineers just to
       | keep it backwards compatible.
       | 
       | In the meantime we've seen entire companies built with a ragtag
       | team of hungry devs.
        
       | throwaway889900 wrote:
       | Thank goodness I don't have to use IBM's Rational Team Concert
       | anymore. Even just thinking about it makes me shudder.
        
         | mosdl wrote:
         | It was a great tool for losing changes!
        
       | danielodievich wrote:
       | I want to thank dev leads who trained this green-behind-the-ears
       | engineer on mysteries of Source Depot. Once I understood it, it
       | was quite illuminating. I am glad we only had a dependency on
       | WinCE and IE, and so the clone only took 20 minutes instead of
       | days. I don't remember your names but I remember your willingness
        | to step up and help onboard a new person so they could start
       | being productive. I pay this attitude forward with new hires here
       | in my team no matter where I go.
        
       | b0a04gl wrote:
        | funny how most folks remember the git migration as a tech win,
        | but honestly the real unlock was devs finally having control
        | over their own flow: no more waiting on sync windows, no more
        | asking leads for branch access. Suddenly everyone could move
        | fast without stepping on each other. That shift did more for
        | morale than any productivity dashboard ever could. Git didn't
        | just fix tooling, it fixed trust in the dev loop.
        
       | bariumbitmap wrote:
       | > In the early 2000s, Microsoft faced a dilemma. Windows was
       | growing enormously complex, with millions of lines of code that
       | needed versioning. Git? Didn't exist. SVN? Barely crawling out of
       | CVS's shadow.
       | 
       | I wonder if Microsoft ever considered using BitKeeper, a
       | commercial product that began development in 1998 and had its
       | public release in 2000. Maybe centralized systems like Perforce
       | were the norm and a DVCS like BitKeeper was considered strange or
       | unproven?
        
         | wslh wrote:
         | There was SourceSafe (VSS) around that time and TFVC
         | afterwards.
        
       | jeffbee wrote:
       | One thing I find annoying about these Perforce hate stories: yes
       | it's awkward to branch in Perforce. It is also the case that
       | there is _no need_ to ever create a branch for feature
        | development when you use Perforce. It's like complaining that it
       | is hard to grate cheese with a trumpet. That just isn't
       | applicable.
        
       ___________________________________________________________________
       (page generated 2025-06-12 23:00 UTC)