[HN Gopher] Ask HN: Recommendation for a SWE looking to get up t...
___________________________________________________________________
Ask HN: Recommendation for a SWE looking to get up to speed with
latest on AI
I am looking to get up to speed with the latest developments in AI.
I use ChatGPT almost every day, and I last used the OpenAI API for
GPT-3.5 last year. I am looking for tech blogs, like HN, to keep
updated on things AI. I came across https://simonwillison.net/ but
it appears fragmented
Author : Rizu
Score : 189 points
Date : 2024-11-27 13:55 UTC (9 hours ago)
| drcwpl wrote:
| Simon's blog is excellent for an SWE
|
| For a general audience - https://www.ai-
| supremacy.com/?utm_source=substack&utm_medium...
|
| From inside the AI Labs - https://aligned.substack.com/
|
| https://milesbrundage.substack.com/
|
| For SWEs - https://artificialintelligencemadesimple.substack.com/
| drcwpl wrote:
| also
|
| https://magazine.sebastianraschka.com/p/understanding-multim...
| toddwprice wrote:
| Subscribe to The Neuron newsletter
| sghiassy wrote:
| https://join.theneurondaily.com/
| Maro wrote:
| I don't think it's a good idea to keep up to date at a
| daily/weekly cadence unless you somehow directly get paid for
| it. It's like checking stocks daily; it doesn't lead to good
| investment decisions.
|
| It's better to do it more batchy, like once every 6-12 months or
| so.
| Luc wrote:
| How do you do that? Once you're out of the loop for half a
| year, it becomes harder to know what's important and what's
| not, I think.
| pilotneko wrote:
| Every release seems novel when it ships. Once something has
| been around for a while and is still being referenced, you
| know it's worth learning.
|
| Waiting 3-6 months to take a deep dive is a good pattern to
| prevent investing your time in dead-end routes.
| SoftTalker wrote:
| Yes, this is why I never buy the latest CPUs and try to
| never run the latest release of any software. Stay a
| (supported) release or two behind the bleeding edge, and
| you'll find stuff is more stable. Common bugs and other
| issues have been shaken out by the early adopters.
| Maro wrote:
| Some ideas:
|
| 1. Buy O'Reilly (and other tech) books as they come out. This
| will have a lag, but essentially somebody did this research &
| summarization work and wrote it up for you in chapters. Note
| that you don't have to read everything in a book. Also, $50
| is a great investment if it saves you tens of hours of time.
|
| 2. Watch conference talks on YouTube by industry leaders, like
| Yann LeCun, or by maintainers of popular libraries. Also,
| YouTube videos on the topic that are upvoted/linked.
|
| 3. If you're interested in hardcore research, look for review
| articles on arxiv.
|
| 4. Look at tutorials/examples in the documentation/repos of
| popular ML/AI libraries, like PyTorch.
|
| 5. Try to cover your blind spots. One way or another, you'll
| know how new AI is applied to SWE and related fields. But how
| is AI applied to perpendicular fields, like designing
| buildings, composing music, or balancing a budget? Covering
| these areas will be tougher and noisier, since most
| commenters there will be non-experts compared to you. To get
| a feel for this, do something that feels unnatural, like
| watching TED talks that seem bullshitty, reading HBR articles
| intended for MBAs, and checking out what Palantir is doing.
| swyx wrote:
| My conference is currently run on a 6-month batch:
| https://www.youtube.com/@aidotengineer
|
| It's curated by me/my team. Hope that helps people keep up
| in the video/talk-length form factor (as in, instead of
| books, though we also have 2-3 hour workshops).
| pdevine wrote:
| The poster's looking for articles, so this recommendation's a bit
| off the mark. I learned more from participating in a few Kaggle
| competitions (https://www.kaggle.com/competitions) than I did
| from reading about AI. Many folks in the community shared their
| homework, and by learning how to follow their explanations I
| developed a much more intuitive understanding of the technology.
| The first competition had a steep learning curve, but I felt
| it was worth it. Having a specific goal and provided datasets
| made the problem space more tractable.
| rpastuszak wrote:
| Out of sheer curiosity, how much time did you spend on it on
| average? How much of this knowledge are you using now?
| hzay wrote:
| Not the poster you responded to, but I learned quite a bit
| from Kaggle too.
|
| I started from scratch, spent 2-4 hrs per day for 6 months,
| and won a silver medal in a Kaggle NLP competition. I use some
| of it now, but not all of it. More than that, I'm quite
| comfortable with models and understand the
| costs/benefits/implications, etc. I started with Andrew Ng's
| intro courses, did a bit of fast.ai, did Karpathy's Zero to
| Hero fully, all of Kaggle's courses, and a few other such
| things. Kagglers share excellent notebooks and I found them
| very helpful. Overall I highly recommend this route of
| learning.
| solardev wrote:
| Thanks for the detailed reply!
| Foobar8568 wrote:
| I was also playing on Kaggle a few years back; similar
| feedback.
| swyx wrote:
| I mean, yes, but how much does the Kaggle/traditional ML
| path actually prepare you for the age of closed model labs
| and LLM APIs?
|
| I'm not even convinced Kaggling helps you interview at an
| OpenAI/Anthropic (it's not a negative, sure, but I don't know
| if it'd be what they'd look for for a research scientist
| role).
| hzay wrote:
| I learned ML only to satisfy my curiosity, so I don't
| know if it's useful for interviewing. :)
|
| Now when I read a paper on something unrelated to AI
| (say, progesterone supplements) and they mention a
| random forest, I know what they're talking about. I
| understand regression, PCA, clustering, etc. When I
| trained a few transformer models (not pretrained) on my
| native language texts, I was shocked by how rapidly they
| learn connotations. I find transformer-based LLMs to be
| very useful, yes, but not unsettlingly AGI-like, as I did
| before learning about them. I understand the usual way of
| building recommender systems, embeddings, and such things.
| Image models like U-Nets, GANs, etc. were very cool too,
| and when your own code produces that magical result, you
| see the power of pretraining + specialization. So yeah, I
| don't know what they do in interviews nowadays, but I
| found my education very fruitful. It was how I felt when
| I first picked up programming.
|
| Re the age of LLMs: it is precisely because LLMs will be
| ubiquitous that I wanted to know how they work. I felt
| uncomfortable treating them as black boxes that you don't
| understand technically. Think about the people who don't
| know simple things about a web browser, like opening dev
| tools and printing the auth token or something. It's not
| great to be in that place.
| jumping_frog wrote:
| Some YouTube channels are good too.
|
| https://www.youtube.com/@umarjamilai
|
| https://huyenchip.com/blog/
| barrenko wrote:
| Get on Twitter (well, X), as that's where the cutting edge is.
| AlphaWeaver wrote:
| As I was building up my understanding/intuition for the internals
| of transformers + attention, I found 3Blue1Brown's series of
| videos (specifically on attention) to be super helpful.
| galangalalgol wrote:
| This has been good for me, but it is more foundational than
| the latest developments. https://www.mattprd.com/p/openai-
| cofounder-27-papers-read-kn...
| adroitboss wrote:
| The best place for the latest information isn't tech blogs, in
| my opinion. It's the Stable Diffusion and LocalLlama
| subreddits. If you are looking to learn about everything on a
| fundamental level, you need to check out Andrej Karpathy on
| YouTube. There are some other notable mentions in other
| people's comments.
| bingemaker wrote:
| Being a coder, I find these resources extremely useful:
|
| GitHub blog: https://github.blog/ai-and-ml/
|
| Cursor blog: https://www.cursor.com/blog
| zellyn wrote:
| Simon's blog is fragmented because it's, well, a blog. It would
| be hard to find a better source to "keep updated on things AI"
| though. He does do longer summary articles sometimes, but mostly
| he's keeping up with things in real time. The search and tagging
| systems on his blog work well, too. I suggest you stick his RSS
| feed in your feed reader, and follow along that way.
|
| Swyx also has a lot of stuff for keeping up to date at
| https://www.latent.space/, including the Latent Space podcast,
| although tbh I haven't listened to more than one or two
| episodes.
| swyx wrote:
| Thanks! I also have a daily news recap here:
| https://buttondown.email/ainews/archive/
| petesergeant wrote:
| Read through this, making flashcards as you go:
| https://eugeneyan.com/writing/llm-patterns/
|
| Then spin up a RAG-enhanced chatbot using pgvector on your
| favourite subject, and keep improving it as you learn about
| cool techniques.
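|
| If pgvector is new to you, here's a hedged sketch of the
| retrieval half (assumes Postgres with the pgvector extension
| and the psycopg driver; the docs table and embed() helper are
| hypothetical stand-ins for your own schema and embedding
| model):
|
|   import psycopg  # pip install "psycopg[binary]"
|
|   def embed(text):  # stand-in: return a list[float], dim 384
|       raise NotImplementedError
|
|   conn = psycopg.connect("dbname=rag")
|   with conn.cursor() as cur:
|       cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
|       cur.execute("""CREATE TABLE IF NOT EXISTS docs (
|           id bigserial PRIMARY KEY,
|           body text,
|           embedding vector(384))""")
|       # Retrieve the 5 chunks nearest the question by cosine
|       # distance (pgvector's <=> operator).
|       q = embed("What does the author say about X?")
|       vec = "[" + ",".join(str(x) for x in q) + "]"
|       cur.execute("SELECT body FROM docs ORDER BY "
|                   "embedding <=> %s::vector LIMIT 5", (vec,))
|       context = "\n".join(r[0] for r in cur.fetchall())
|
| Stuff `context` into your chat prompt and you have the
| "retrieval" in RAG.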
| nullandvoid wrote:
| YouTube channels:
|
| - https://www.youtube.com/@aiexplained-official
|
| - https://www.youtube.com/@DaveShap
|
| - https://www.youtube.com/@TwoMinutePapers/videos
|
| Then the AI Supremacy newsletter.
| swyx wrote:
| DaveShap quit AI, right? Got AGI-pilled / "oneshotted by
| ayahuasca", as the kids say.
| mindcrime wrote:
| He was only gone for a few days, IIRC. At any rate, he's back
| to publishing AI-related content again, and it looks like all
| (?) of his old content is back on his YT channel.
| swyx wrote:
| Honestly, his channel quality is notably different from the
| other two you mentioned. I'm vaguely curious what you get out
| of it that makes you put him on the same tier.
| mindcrime wrote:
| I think you replied to the wrong person. I didn't put
| DaveShap on any tier or anything.
|
| That said... I will say that in one of my other replies I
| did mention that some YT channels in this space can be a
| bit tabloid'ish, and I may have had Shapiro partly in in
| mind when saying that. But I still subscribe to his
| channel and some similar ones, just to get a variety of
| takes and perspectives.
| eachro wrote:
| Reproduce nanoGPT.
|
| Then find a small dataset and see if you can start getting close
| to some of the reported benchmark numbers with similar
| architectures.
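|
| If it helps to see the shape of what you'd be reproducing,
| here's a minimal single-head causal self-attention block in
| PyTorch, the core layer nanoGPT stacks (a sketch for
| orientation, not Karpathy's actual code):
|
|   import torch
|   import torch.nn as nn
|   import torch.nn.functional as F
|
|   class CausalSelfAttention(nn.Module):
|       def __init__(self, n_embd, block_size):
|           super().__init__()
|           self.qkv = nn.Linear(n_embd, 3 * n_embd)
|           self.proj = nn.Linear(n_embd, n_embd)
|           # Lower-triangular mask: each token attends only
|           # to earlier positions.
|           self.register_buffer("mask",
|               torch.tril(torch.ones(block_size, block_size)))
|
|       def forward(self, x):
|           B, T, C = x.shape
|           q, k, v = self.qkv(x).split(C, dim=2)
|           att = (q @ k.transpose(-2, -1)) / (C ** 0.5)
|           att = att.masked_fill(
|               self.mask[:T, :T] == 0, float("-inf"))
|           return self.proj(F.softmax(att, dim=-1) @ v)
|
|   y = CausalSelfAttention(32, 8)(torch.randn(1, 8, 32))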
| cranberryturkey wrote:
| Check out Ollama. It lets you run open models on your own
| hardware, and it provides an easy-to-use REST API similar to
| OpenAI's.
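|
| A minimal sketch of calling that API from Python (assumes the
| requests package and that you've already pulled a model with,
| say, `ollama pull llama3`):
|
|   import requests
|
|   # Non-streaming generation against the local Ollama server.
|   resp = requests.post(
|       "http://localhost:11434/api/generate",
|       json={"model": "llama3",
|             "prompt": "Why is the sky blue?",
|             "stream": False},
|   )
|   print(resp.json()["response"])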
| febin wrote:
| Build a tool on top of the LLM layer for a specific use case.
| That'll get you up to speed. You haven't missed much.
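|
| For instance, a tiny stdin-to-summary tool (a sketch using the
| current OpenAI Python client; the model name is only an
| example):
|
|   import sys
|   from openai import OpenAI  # pip install openai
|
|   client = OpenAI()  # reads OPENAI_API_KEY from the env
|   resp = client.chat.completions.create(
|       model="gpt-4o-mini",  # example; any chat model works
|       messages=[
|           {"role": "system",
|            "content": "Summarize the input in 3 bullets."},
|           {"role": "user", "content": sys.stdin.read()},
|       ],
|   )
|   print(resp.choices[0].message.content)
|
| Run it as `cat notes.txt | python summarize.py` and iterate
| from there.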
| magic_smoke_ee wrote:
| Exactly. Avoid intentionally throw-away effort and instead
| attempt to build something specific and practical. Learn by
| doing.
| Workaccount2 wrote:
| The LocalLlama subreddit, although focused mostly on open-
| source, locally run models, still has ample discussion of
| SOTA models too.
|
| https://old.reddit.com/r/LocalLLaMA/
| Der_Einzige wrote:
| Sadly, you'll have to include 4chan /g/'s local models general,
| which, unfortunately, seems to have top AI researchers posting
| there (anonymously).
| not_your_vase wrote:
| Unpopular opinion: if you can't use Google or ChatGPT to get an
| answer to this question, I have bad news for you.
| henry2023 wrote:
| Maybe you should read the responses here and acknowledge the
| value of a community.
| not_your_vase wrote:
| Maybe you should try google instead of being so
| condescending, and compare the first 2 pages' results with
| this page...
|
| We are not exactly talking about big secrets. We are talking
| about "llm learn resources" keywords - which apparently needs
| handholding in 2024. And "acknowledging the value of the
| community".
| simonw wrote:
| My blog is very high-volume, so yeah, it can be difficult to
| know where to look on it.
|
| I use tags a lot - these ones might be more useful for you:
|
| https://simonwillison.net/tags/prompt-engineering/ - collects
| notes on prompting techniques
|
| https://simonwillison.net/tags/llms/ - everything relating to
| LLMs
|
| https://simonwillison.net/tags/openai/ and
| https://simonwillison.net/tags/anthropic/ and
| https://simonwillison.net/tags/gemini/ and
| https://simonwillison.net/tags/llama/ and
| https://simonwillison.net/tags/mistral/ - I have tags for each of
| the major model families and vendors
|
| Every six months or so I write something (often derived from a
| conference talk) that's more of a "catch up with the latest
| developments" post - a few of those:
|
| - Stuff we figured out about AI in 2023 -
| https://simonwillison.net/2023/Dec/31/ai-in-2023/ - I will
| probably do one of those for 2024 next month
|
| - Imitation Intelligence, my keynote for PyCon US 2024 -
| https://simonwillison.net/2024/Jul/14/pycon/ from July this year
| gargigupta97 wrote:
| Unwind AI would be helpful. They publish daily newsletters on
| AI as well as tutorials on building apps with step-by-step
| walkthroughs. Super focused on developers.
| https://www.theunwindai.com/
| notslow wrote:
| Machine Learning Mastery (https://machinelearningmastery.com)
| provides code examples for many of the popular models. For me,
| seeing and writing code has been helpful in understanding how
| things work and makes it easier to put new developments in
| context.
| bmitc wrote:
| Are you wanting to get into LLMs in particular, or something
| else? I am a software engineer also trying to make headway
| into so-called "AI", but I have little interest in LLMs. For
| one, the field is suffering from a major hype bubble right
| now. Second, because of reason one, it has a huge amount of
| attention from people who study and work on this every day,
| and I don't have the time to compete with that. Lastly, as
| mentioned, I have no interest in it, and my understanding of
| LLMs leads me to believe they have few interesting
| applications besides generating a huge amount of noise in
| society and dumping heat. The Internet (blogs, articles, and
| even YouTube) is already being overrun by LLM-generated
| material that is effectively worthless. I'm not sure of the
| net positive of LLMs.
|
| Personally, I prefer to work backwards and then forwards.
| What I mean is that I want to understand the basics and
| fundamentals first. So I'm slowly trying to bone up on my
| statistics, probability, and information theory, and have
| targeted machine learning books that also take a fundamentals-
| first approach. There's no end to books in this realm for
| neural networks, machine learning, etc., so it's hard to
| recommend beyond what I've just picked, and I'm just getting
| started anyway.
|
| If you can get your employer to pay for it, MIT xPRO has
| courses on machine learning
| (https://xpro.mit.edu/programs/program-v1:xPRO+MLx/ and
| https://xpro.mit.edu/courses/course-v1:xPRO+GenAI/). These
| will likely give a pretty up-to-date overview of the
| technologies.
| danofsteel32 wrote:
| I recently wrote a post for a coworker who asked the exact same
| question.
|
| https://dandavis.dev/llm-knowledge-dump.html
| iamwil wrote:
| Lots of people can get impressive demos up and running, but if
| you want to run AI products in production, you're going to
| have to do system evals. System evals check that your product
| does what it says on the box, even for hard-to-quantify
| qualities (a toy sketch follows the links below).
|
| We wrote a zine on system evals without jargon:
| https://forestfriends.tech
|
| Eugene Yan has written extensively on it
| https://eugeneyan.com/writing/evals/
|
| Hamel has as well. https://hamel.dev/blog/posts/evals/
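|
| To make "system evals" concrete, a toy harness (call_model()
| is a hypothetical stand-in for your API client; real suites
| would use graded rubrics or an LLM judge rather than substring
| checks):
|
|   def call_model(prompt):  # stand-in for your LLM client
|       raise NotImplementedError
|
|   CASES = [
|       {"prompt": "Extract the city: 'Ada flew to Paris.'",
|        "expect": "Paris"},
|       {"prompt": "Extract the city: 'Bo lives in Osaka.'",
|        "expect": "Osaka"},
|   ]
|
|   passed = 0
|   for case in CASES:
|       out = call_model(case["prompt"])
|       ok = case["expect"].lower() in out.lower()  # crude check
|       passed += ok
|       print("PASS" if ok else "FAIL", case["prompt"])
|   print(f"{passed}/{len(CASES)} passed")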
| aaronrobinson wrote:
| What a goldmine of recommendations. I like Sam Witteveen's
| YouTube stuff for keeping up to speed:
| https://m.youtube.com/@samwitteveenai
| fourside wrote:
| My issue with YouTube channels that focus on AI news is that
| they're heavily incentivized to give you a frequent stream of
| attention-grabbing news. Week-by-week updates aren't that
| helpful. It's easy to miss the bigger picture, and there's too
| much content for it to feel like a good use of time.
| Rizu wrote:
| I agree with this statement; most YouTube channels are
| incentivized to keep repeating the same trivial information,
| like how to compose prompts, etc.
| aaronrobinson wrote:
| Completely agree in general, but his are not that. Yes, he
| talks about recent stuff, but it's very considered and not
| attention- or influence-seeking, IMO.
| fallinditch wrote:
| A new short course on the freeCodeCamp YouTube channel looks
| good:
|
| Ollama Course - Build AI Apps Locally
| https://youtu.be/GWB9ApTPTv4?feature=shared
|
| As an aside, does anyone have any ideas about this: there should
| be an app like an 'auto-RAG' that scrapes RSS feeds and URLs, in
| addition to ingesting docs, text and content in the normal RAG
| way. Then you could build AI chat-enabled knowledge resources
| around specific subjects. Autogenerated summaries and dashboards
| would provide useful overviews.
|
| Perhaps this already exists?
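|
| The ingestion half of that idea fits in a small script (a
| sketch assuming the feedparser package; embed() and store()
| are hypothetical stand-ins for your embedding model and
| vector store, and the feed URL is just an example):
|
|   import feedparser  # pip install feedparser
|
|   FEEDS = ["https://simonwillison.net/atom/everything/"]
|
|   def embed(text):  # stand-in: your embedding model
|       raise NotImplementedError
|
|   def store(url, text, vec):  # stand-in: pgvector, etc.
|       raise NotImplementedError
|
|   for feed_url in FEEDS:
|       for entry in feedparser.parse(feed_url).entries:
|           text = entry.title + "\n" + entry.get("summary", "")
|           store(entry.link, text, embed(text))
|
| Summaries and dashboards would then be LLM calls over whatever
| the store returns for a topic.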
| A4ET8a8uTh0 wrote:
| << there should be an app like an 'auto-RAG' that scrapes RSS
| feeds and URLs,
|
| I am not aware of one that exists yet, but the challenge I see
| with it is rather simple: you get overwhelmed with information
| really quickly. In other words, you would still need a human
| somewhere in that process to review those scrapes, and the
| quality of them varies widely. For example, even on HN it is
| not a given that a link will be pure gold (you still want to
| check if it fits your use case).
|
| That said, as ideas go, it sounds like a fun weekend project.
| be_erik wrote:
| I do exactly this with Hoarder. I passively build tagged
| knowledge bases from the archived pages and then feed them to
| a RAG setup.
| swyx wrote:
| https://github.com/hoarder-app/hoarder for the mention
| fallinditch wrote:
| Cool. Hoarder looks interesting, thanks for the tip. How is
| it working out for you? Are you using the feature for auto-
| hoarding RSS feeds?
| be_erik wrote:
| I am! It works great and it's reasonably easy to snapshot
| sites without RSS on a cron.
| JSDevOps wrote:
| First thing you need to do is change your LinkedIn title to
| "AI evangelist", then go to your boss and ask for triple the
| pay. Then let the chips fall where they may. Oh, also rename
| all your GitHub or personal projects to have AI in the name.
| You don't actually have to do much else.
| mavelikara wrote:
| I found the video lectures of the "Advanced NLP" course by
| Mohit Iyyer very useful for getting started:
| https://people.cs.umass.edu/~miyyer/cs685/
| BillFranklin wrote:
| I read about 30 LLM papers a couple of months ago, dated
| 2018-2024. Mostly, folks are publishing on the "how do we
| prompt better" problem, and you can get the gist in about a
| day by reading a few blogs (RAG, fine-tuning, tool use, etc.).
| There is also progress being made on model capabilities, like
| multimodality, and each company seems to be pushing in only
| slightly different directions, but essentially they are still
| black boxes.
|
| It depends what you are looking for, honestly; "the latest
| things happening" is pretty vague. I'd say the place to look
| is probably just the blogs of OpenAI/Anthropic/Gemini, since
| they are the only teams with inside information and novel
| findings to report. Everyone else is just using the tools we
| are given.
| mindcrime wrote:
| Lots of good suggestions here already. I'd start by adding one
| quick note, though: "AI" is more than just LLMs. Sure, the
| "current, trendy, fashionable" thing is all LLMs, but the
| field as a whole is still much larger. I'd encourage you not
| to focus myopically on LLMs to the exclusion of everything
| else. Depending on your existing background knowledge, there's
| a lot to be said for going out and getting a copy of
| _Artificial Intelligence: A Modern Approach_ and reading
| through it. Likewise for something like _Hands-On Machine
| Learning with Scikit-Learn, Keras, and TensorFlow_.
|
| Beyond that: there are some decent subreddits for keeping up
| with AI happenings, a lot of good YouTube channels (although a
| lot of the ones that talk about the "current, trendy" AI stuff
| tend to be a bit tabloid-ish), and even a couple of Facebook
| groups. You can also find good signal by choosing the right
| people to follow on Twitter/LinkedIn/Mastodon/Bluesky/etc.
|
| https://www.reddit.com/r/artificial/
|
| https://reddit.com/r/machineLearning/
|
| https://www.reddit.com/r/LLM/
|
| https://www.reddit.com/r/agi
|
| https://www.reddit.com/r/ollama/
|
| https://www.youtube.com/@matthew_berman
|
| https://www.youtube.com/@TheAiGrid
|
| https://www.youtube.com/@WesRoth
|
| https://www.youtube.com/@DaveShap
|
| https://www.youtube.com/c/MachineLearningStreetTalk
|
| https://www.youtube.com/@twimlai
|
| https://www.youtube.com/@YannicKilcher
|
| And you can always go straight to "the source" and follow
| preprints showing up on arXiv.
|
| https://arxiv.org/corr
|
| For tools to make it easier to track new releases, arXiv supports
| subscriptions to daily digest emails, and also has RSS feeds.
|
| https://info.arxiv.org/help/subscribe.html
|
| https://info.arxiv.org/help/rss.html
|
| There are also some bots in the Fediverse that push out links to
| new arXiv papers.
| senko wrote:
| I follow these:
|
| * Matt Berman on X / YT
|
| * AI-summarized AI news digest: https://buttondown.com/ainews by
| swyx
|
| * https://codingwithintelligence.com/about by Rick Lamers
|
| Then I manually follow up to learn more about the specific
| topics/news I'm interested in.
| swyx wrote:
| Thanks for following!
|
| I admire the YouTubers a lot and often wonder if I should be
| venturing into that domain. YouTube takes a lot of work, but
| it also has the greatest reach by far.
| throwup238 wrote:
| If you do, please do it like Practical Engineering, with a
| full-text transcript in article form.
| handzhiev wrote:
| For news-like content I follow accounts on X: @kimmonismus,
| @apples_jimmy, and the accounts of Anthropic, Mistral, Gemini
| / DeepMind, and OpenAI. I think everyone who is really
| interested in the hot AI developments must also follow what
| comes out of China. I follow https://chinai.substack.com/ but
| I am open to hearing about other Chinese resources.
| goosethe wrote:
| https://playground.tensorflow.org/ - this is a classic which,
| IMO, breaks it down into the simplest visuals.
| zackmorris wrote:
| LLMs and neural nets from first principles:
|
| https://arxiv.org/pdf/2404.17625 (pdf)
|
| https://news.ycombinator.com/item?id=40408880 (llama3
| implementation)
|
| https://news.ycombinator.com/item?id=40417568 (my comment on
| llama3 with breadcrumbs)
|
| Admittedly, I'm way behind on how this translates to software on
| the newest video cards. Part of that is that I don't like the
| emphasis on GPUs. We're only seeing the SIMD side of deep
| learning with large matrices and tensors. But there are at least
| a dozen machine learning approaches that are being neglected,
| mainly genetic algorithms. This means that we're perhaps
| focused too much on implementations and not on core
| algorithms. It would
| be like trying to study physics without change of coordinates,
| Lorentz transformations or calculus. Lots of trees but no forest.
|
| To get back to rapid application development in machine learning,
| I'd like to see a 1000+ core, 1+ GHz CPU with 16+ GB of core-
| local RAM for under $1000, so that we don't have to manually
| transpile our algorithms to GPU code. That should have arrived
| around 2010 but the mobile bubble derailed desktop computing.
| Today it should be more like 10,000+ cores for that price at
| current transistor counts, increasing by a factor of about 100
| each decade by what's left of Moore's law.
|
| We also need better languages. Something like a hybrid of Erlang
| and Go with always-on auto-parallelization to run our human-
| readable but embarrassingly parallel code.
|
| Short of that, there might be an opportunity to write a
| transpiler that converts C-style imperative or functional code to
| existing GPU code like CUDA (MIMD -> SIMD). Julia is the only
| language I know of even trying to do this.
|
| Those are the areas where real work is needed to democratize
| AI, areas SWEs like us may never be able to work on while
| we're too busy making rent. And the big players like OpenAI
| and Nvidia have no incentive to pursue them and disrupt
| themselves.
|
| Maybe someone can find a challenging profit where I only see
| disillusionment, and finally deliver UBI or at least stuff like
| 3D printed robots that can deliver the resources we need outside
| of a rigged economy.
| aanet wrote:
| Excellent thread! Love the responses.
|
| Is there a way to save this thread on HN? 'Cos I'd love that.
|
| Thx
| simpaticoder wrote:
| There is a favorite link on the original post. You can also
| save the content using a variety of methods, such as Pocket, or
| paste it into a tool like Obsidian or similar.
| mindcrime wrote:
| Yes, see here:
|
| https://fogbeam.com/hn_favorite.png
| tikkun wrote:
| It sounds like you want broader stuff - not necessarily
| learning how to train models, but more like learning to use
| them and how they work.
|
| https://news.ycombinator.com/item?id=36195527
|
| Hacker's Guide to LLMs by Jeremy from Fast.ai -
| https://www.youtube.com/watch?v=jkrNMKz9pWU
|
| State of GPT by Karpathy -
| https://www.youtube.com/watch?v=bZQun8Y4L2A
|
| LLMs by 3b1b - https://www.youtube.com/watch?v=LPZh9BOjkQs
|
| Visualizing transformers by 3b1b -
| https://www.youtube.com/watch?v=KJtZARuO3JY
|
| How ChatGPT trained - https://www.youtube.com/watch?v=VPRSBzXzavo
|
| AI in a nutshell - https://www.youtube.com/watch?v=2IK3DFHRFfw
|
| How Carlini uses LLMs -
| https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
|
| For staying updated:
|
| X/Twitter & Bluesky. Go and follow people who work at OpenAI,
| Anthropic, Google DeepMind, and xAI.
|
| Podcasts: No Priors, Generally Intelligent, Dwarkesh Patel,
| Sequoia's "Training Data"
| jayalammar wrote:
| We actually just wrote a book with your profile in mind -
| especially if by "AI" you mean LLMs in particular and if
| you're a visual learner. It's called Hands-On Large Language
| Models, and it contains 300 original figures explaining a
| couple hundred core intuitions and applications of these
| models. You can also read it online on the O'Reilly platform.
| I find that after acquiring the main intuitions, people find
| it much easier to move on to code implementations or papers.
| ketanmaheshwari wrote:
| https://a16z.com/ai-canon/
| explaingarlic wrote:
| So I'm currently using "OpenCV University"'s playlist on
| YouTube to get myself up to speed with computer vision, and
| this has led me down a spiraling staircase into the depths of
| CNNs.
|
| Started off here:
| https://www.youtube.com/watch?v=hZWgEPOVnuM&list=PL6e-Bu0cqf...
|
| Ended up here:
| https://www.youtube.com/watch?v=_5XYLA2HLmo&list=PL6e-Bu0cqf...
|
| After that, I've had some recent projects that I love to mess
| around with, such as a better license plate detection API for
| U.K. plates than what currently exists. Once I completed those
| two courses, I had a good enough baseline to work from: I'd
| encounter a repository and google around if I needed to learn
| something new.
|
| Short, simple, not painful, etc. I don't have the advanced
| mathematical background (nor familiarity with American
| mathematical notation) that I'd need to digest the MIT course
| set, so this learning path has been the best for me. I'm no
| expert whatsoever, though.
| brcmthrowaway wrote:
| Who else bookmarked this Ask HN thread, never to revisit it?
___________________________________________________________________
(page generated 2024-11-27 23:02 UTC)