GPT-5: Everything You Need to Know (Reference Links)

Source: the WeChat account 机器学习算法与Python实战. Reposted for academic sharing only; it will be removed upon request.

Original post: 三万字详解!GPT-5:你需要知道的一切

Author: Alberto Romero (compiled by 青稞AI)

Original article: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know

This very long essay (part commentary, part exploration) is about GPT-5. It is published in two parts. But it covers much more than that. It is about what we expect from the next generation of AI models. It is about the exciting new capabilities on the horizon (such as reasoning and agents). It is about the GPT-5 technology and the GPT-5 product. It is about the competitive business pressures OpenAI faces and the technical constraints its engineers confront. It is about all of these things, which is why it is so long.

Reference Links

[1] GPT-3: https://arxiv.org/abs/2005.14165
[2] LaMDA: https://blog.google/technology/ai/lamda/
[3] OPT: https://ai.meta.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/
[4] MT-NLG: https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/
[5] GPT-4: https://openai.com/research/gpt-4
[6] Bill Gates and other industry insiders: https://the-decoder.com/bill-gates-does-not-expect-gpt-5-to-be-much-better-than-gpt-4/
[7] The GPT-5 class of models: https://www.thealgorithmicbridge.com/i/143486801/the-gpt-class-of-models
[8] GPT-5 or GPT-4.5?: https://www.thealgorithmicbridge.com/i/143486801/gpt-or-gpt
[9] The GPT brand trap: https://www.thealgorithmicbridge.com/i/143486801/the-gpt-brand-trap
[10] When will OpenAI release GPT-5?: https://www.thealgorithmicbridge.com/i/143486801/when-will-openai-release-gpt
[11] How good will GPT-5 be?: https://www.thealgorithmicbridge.com/i/143486801/how-good-will-gpt-be
[12] How OpenAI's goals shape GPT-5: https://www.thealgorithmicbridge.com/i/143486801/how-openais-goals-shape-gpt
[13] GPT-5 and the ruling of the scaling laws: https://www.thealgorithmicbridge.com/i/143486801/gpt-and-the-ruling-of-the-scaling-laws
[14] Model size: https://www.thealgorithmicbridge.com/i/143486801/model-size
[15] Dataset size: https://www.thealgorithmicbridge.com/i/143486801/dataset-size
[16] Compute: https://www.thealgorithmicbridge.com/i/143486801/compute
[17] My estimate for GPT-5's size: https://www.thealgorithmicbridge.com/i/143486801/my-estimate-for-gpt-s-size
[18] Algorithmic breakthroughs in GPT-5: https://www.thealgorithmicbridge.com/i/143486801/algorithmic-breakthroughs-in-gpt
[19] Multimodality: https://www.thealgorithmicbridge.com/i/143486801/multimodality
[20] Robotics: https://www.thealgorithmicbridge.com/i/143486801/robotics
[21] Reasoning: https://www.thealgorithmicbridge.com/i/143486801/reasoning
[22] Personalization: https://www.thealgorithmicbridge.com/i/143486801/personalization
[23] Reliability: https://www.thealgorithmicbridge.com/i/143486801/reliability
[24] Agents: https://www.thealgorithmicbridge.com/i/143486801/agents
[25] Meta's Llama 3 405B is also GPT-4 class: https://ai.meta.com/blog/meta-llama-3/
[26] performance-wise, all three are roughly on par: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
[27] after the GPT-4 Turbo upgrade: https://twitter.com/OpenAI/status/1777772582680301665
[28] no longer the case: https://twitter.com/lmsysorg/status/1778555678174663100
[29] Footnote 1: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-1-143486801
[30] Gemini Advanced (with the 1.0 Ultra backend): https://blog.google/products/gemini/bard-gemini-advanced-app/
[31] Gemini 1.5: https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/
[32] it is already GPT-4 class: https://twitter.com/OriolVinyalsML/status/1782780613537178105
[33] given the gap between 1.0 Pro and 1.0 Ultra: https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf
[34] the team building the models: https://www.theinformation.com/articles/googles-demis-hassabis-chafes-under-new-ai-push
[35] the team on Google's marketing side, which often fails: https://www.thealgorithmicbridge.com/p/google-gemini-anti-whiteness-disaster
[36] GPT-4.5 has leaked: https://the-decoder.com/openais-gpt-4-5-turbo-leaked-on-search-engines-and-could-launch-in-june/
[37] reports: https://www.reddit.com/r/OpenAI/comments/1bd0l8b/gpt_45_turbo_confirmed/
[38] YOLO run: https://twitter.com/_jasonwei/status/1757486124082303073
[39] he wants to double down on iterative deployment: https://youtu.be/jvqFAi7vkBc?t=3912
[40] rather than a tricky demo: https://twitter.com/Google/status/1732467423654105330
[41] Footnote 2: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-2-143486801
[42] the autoregressive trap: https://twitter.com/random_walker/status/1683208798700449792
[43] trademark office: https://uspto.report/TM/98233550
[44] self-sabotaging its own future by anchoring it to the past: https://www.thealgorithmicbridge.com/p/a-chatgpt-moment-for-everything
[45] Lex Fridman's interview with Sam Altman: https://youtu.be/jvqFAi7vkBc
[46] GPT-5's release date: https://youtu.be/jvqFAi7vkBc?t=3973
[47] added: https://youtu.be/jvqFAi7vkBc?t=3992
[48] also said: https://youtu.be/jvqFAi7vkBc?t=4018
[49] not shipping shocking updates to the world: https://youtu.be/jvqFAi7vkBc?t=3926
[50] this could even explain the latest GPT-4 Turbo release (April 9): https://twitter.com/OpenAI/status/1777772582680301665
[51] sources say OpenAI is expected to release a "materially better" GPT-5 for its chatbot mid-year: https://archive.is/k2SuH
[52] according to two people with knowledge of the matter, Sam Altman: https://archive.is/o/k2SuH/https://www.businessinsider.com/openai-insiders-describe-sam-altmans-leadership-2023-12
[53] GPT-4 finished training in August 2022: https://cdn.openai.com/papers/gpt-4.pdf#page=42
[54] Microsoft's Bing Chat was already running GPT-4 under the hood: https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI’s-GPT-4
[55] the precedent of AI-powered political propaganda; OpenAI surely wouldn't be that reckless: https://www.wsj.com/politics/how-i-built-an-ai-powered-self-running-propaganda-machine-for-105-e9888705
[56] Footnote 3: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-3-143486801
[57] (: https://www.mk.co.kr/news/it/10924466
[58] kind of sucks: https://youtu.be/jvqFAi7vkBc?t=2703
[59] smarter: https://youtu.be/jvqFAi7vkBc?t=5332
[60] via Howie Xu: https://twitter.com/H0wie_Xu/status/1745657992459272423
[61] told Business Insider: https://archive.is/k2SuH
[62] what he said was: https://youtu.be/jvqFAi7vkBc?t=2740
[63] evaluations are problematic: https://www.thealgorithmicbridge.com/i/141137119/the-virtues-of-an-independent-chatbot-arena
[64] SWE-bench: https://www.swebench.com/
[65] ARC: https://github.com/fchollet/ARC
[66] GPT-4 on SWE-bench: https://twitter.com/jyangballin/status/1775114444370051582
[67] GPT-3 on ARC: https://twitter.com/fchollet/status/1636054491480088823
[68] GPT-4 on ARC: https://community.openai.com/t/gpt-4-and-the-arc-challenge/168955
[69] SAT, Bar, AP: https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1#gpt-4-has-a-shot-at-passing-the-cfa-exam-but-chatgpt-not-a-chance-1
[70] not contaminated: https://twitter.com/cHHillee/status/1635790330854526981
[71] the nonlinear "exponential" scaling laws: https://arxiv.org/abs/2001.08361
[72] 1.8T parameters: https://www.thealgorithmicbridge.com/p/gpt-4s-secret-has-been-revealed
[73] the number of parameters is only: https://arxiv.org/abs/2203.15556
[74] stated goal is AGI: https://openai.com/blog/planning-for-agi-and-beyond
[75] make something people want: https://paulgraham.com/good.html
[76] not as exclusive as before: https://www.spglobal.com/marketintelligence/en/news-insights/latest-news-headlines/microsoft-further-diversifies-its-ai-bets-80641945
[77] had to drop a project codenamed "Arrakis": https://www.theinformation.com/articles/openai-dropped-work-on-new-arrakis-ai-model-in-rare-setback
[78] as severe as in mid-2023: https://www.thealgorithmicbridge.com/p/the-gpu-shortage-has-forced-ai-companies
[79] internet data shortage: https://archive.is/76W8c
[80] data center shortages and the: https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
[81] new algorithms: https://www.thealgorithmicbridge.com/i/135959842/companies-looking-beyond-current-algorithms
[82] an empirical form of the scaling laws: https://arxiv.org/abs/2001.08361
[83] DeepMind revised these laws: https://towardsdatascience.com/a-new-ai-trend-chinchilla-70b-greatly-outperforms-gpt-3-175b-and-gopher-280b-408b9b4510
[84] is disputed: https://arxiv.org/abs/2404.10102
[85] Altman claimed in 2023: https://www.youtube.com/watch?v=T5cPoNwO7II&feature=youtu.be
[86] abandoning scale, one of them: https://twitter.com/gdb/status/1750558864469299622
[87] OpenAI made GPT-4 a multimodal model: https://www.thealgorithmicbridge.com/p/gpt-4s-secret-has-been-revealed
[88] Footnote 4: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-4-143486801
[89] Richard Sutton's advice in "The Bitter Lesson": http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[90] 117 million: https://www.makeuseof.com/gpt-models-explained-and-compared/
[91] 1.5 billion: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
[92] 175 billion: https://arxiv.org/abs/2005.14165
[93] 1.8 trillion: https://www.semianalysis.com/p/gpt-4-architecture-infrastructure
[94] 2-5T parameters: https://lifearchitect.ai/gpt-5/#summary
[95] GPT-5 had been in training since November: https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded
[96] was still underway a month ago,: https://archive.is/k2SuH
[97] it was still learning): https://ai.meta.com/blog/meta-llama-3/
[98] 12-13: https://archive.is/76W8c
[99] trillion tokens: https://www.semianalysis.com/p/gpt-4-architecture-infrastructure
[100] up to 100 trillion tokens to improve it, if they can find: https://arxiv.org/pdf/2203.15556.pdf
[101] collecting that many tokens: https://twitter.com/ylecun/status/1750614681209983231
[102] violate YouTube's terms of service: https://www.bloomberg.com/news/articles/2024-04-04/youtube-says-openai-training-sora-with-its-videos-would-break-the-rules
[103] is already a common practice: https://www.ft.com/content/053ee253-820e-453a-a1d5-0f24985258de
[104] running out: https://www.youtube.com/watch?v=ZPPBujNssnU
[105] not AGI yet: https://www.thealgorithmicbridge.com/i/142204815/the-agi-has-been-achieved-trap
[106] 2: https://www.databricks.com/blog/coreweave-nvidia-h100-part-1
[107] 4x: https://lambdalabs.com/blog/nvidia-h100-gpu-deep-learning-performance-analysis
[108] he wants the latter to be 10x more efficient: https://youtu.be/1egAKCKPKCk?t=1511
[109] 2: https://lambdalabs.com/blog/flashattention-2-lambda-cloud-h100-vs-a100#h100-vs-a100-results
[110] 8x: https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/
[111] parallelism configurations: https://www.semianalysis.com/i/143439831/inference-parallelism-techniques-pipeline-parallelism-tensor-parallelism-expert-parallelism-and-data-parallelism
[112] another possibility, given that OpenAI keeps improving GPT-4,: https://twitter.com/OpenAI/status/1777772582680301665
[113] ChatGPT usage is not growing: https://substack.com/@exponentialview/note/c-52677620
[114] Footnote 5: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-5-143486801
[115] here are hints of what we can expect: https://www.reddit.com/r/OpenAI/comments/1bz6qwj/sam_altman_reveals_whats_next_for_ai/
[116] Footnote 6: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-6-143486801
[117] Footnote 7: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-7-143486801
[118] multimodality was still a dream: https://www.thealgorithmicbridge.com/i/108431509/multimodality-the-first-good-multimodal-large-language-model
[119] is something they cannot afford: https://www.thealgorithmicbridge.com/p/why-ai-is-doomed-without-neuroscience
[120] humans actually have more: https://www.newscientist.com/article/mg18524841-600-senses-special-doors-of-perception/
[121] the modalities animals have that we don't: https://www.discovermagazine.com/planet-earth/the-5-senses-animals-have-that-humans-dont
[122] Voice Engine: https://openai.com/blog/navigating-the-challenges-and-opportunities-of-synthetic-voices
[123] announced Sora: https://openai.com/sora
[124] The Information reported: https://www.theinformation.com/articles/googles-demis-hassabis-chafes-under-new-ai-push?rc=j0xnsg
[125] artists: https://openai.com/blog/sora-first-impressions
[126] testing first impressions at TED: https://twitter.com/TEDTalks/status/1781351036877156452
[127] which may include OpenAI Sora: https://www.theverge.com/2024/4/15/24130804/adobe-premiere-pro-firefly-video-generative-ai-openai-sora
[128] the partnership with Figure: https://www.businessinsider.com/openai-bets-big-on-humanoid-robots-with-figure-ai-2024-2
[129] flashy demos: https://twitter.com/coreylynch/status/1767927194163331345
[130] my most confident view, though not widely accepted in AI circles: https://towardsdatascience.com/artificial-intelligence-and-robotics-will-inevitably-merge-4d4cd64c3b02
[131] Mitchell wrote a Science review on general intelligence: https://www.science.org/doi/10.1126/science.ado7069
[132] suggesting: https://barsaloulab.org/Online_Articles/2020-Barsalou-Jour_Cognition-challenges_opportunities.pdf
[133] social: https://www.science.org/doi/10.1126/science.1146282
[134] cultural: https://doi.org/10.1017/S0140525X21001710
[135] abandoned it: https://venturebeat.com/business/openai-disbands-its-robotics-research-team/
[136] video generation will lead to AGI by simulating everything: https://twitter.com/agihouse_org/status/1776827897892024734
[137] Moravec's paradox: https://en.wikipedia.org/wiki/Moravec's_paradox
[138] Moravec's paradox: https://en.wikipedia.org/wiki/Moravec's_paradox
[139] tasks: https://twitter.com/jyangballin/status/1775114444370051582
[140] problems: https://community.openai.com/t/gpt-4-and-the-arc-challenge/168955
[141] fluid intelligence: https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence
[142] reason "in an extremely limited way," in Altman's words. (In: https://youtu.be/PkXELH6Y2lM?t=315
[143] MMLU: https://arxiv.org/pdf/2009.03300v3.pdf
[144] BIG-bench: https://github.com/google/BIG-bench
[145] sampling can prove the presence of knowledge but not its absence: https://gwern.net/gpt-3-nonfiction#common-sense-knowledge
[146] absolute [scores] on problems like the ARC challenge: https://github.com/fchollet/ARC
[147] Footnote 8: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-8-143486801
[148] DeepMind's AlphaGo Zero did, 100-0: https://deepmind.google/discover/blog/alphago-zero-starting-from-scratch/
[149] AlphaGo: https://deepmind.google/technologies/alphago/
[150] DeepMind has already tested this approach: https://arxiv.org/abs/2107.05407
[151] Bengio: https://youtu.be/T3sxeTgT4qc
[152] Yann LeCun: https://youtu.be/vyqXLJsmsrk
[153] AlphaZero: https://deepmind.google/discover/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/
[154] MuZero: https://deepmind.google/discover/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules/
[155] already has some results in this direction: https://www.thealgorithmicbridge.com/i/133160725/gemini-a-multimodal-chatgpt-alphago
[156] speculation about Q*: https://www.theinformation.com/articles/openai-made-an-ai-breakthrough-before-altman-firing-stoking-excitement-and-concern?rc=j0xnsg
[157] said: https://twitter.com/polynoamial/status/1676971503261454340
[158] a recent talk at Sequoia: https://youtu.be/c3b-JASoPi0
[159] Shane Legg: https://youtu.be/qulfo6-54k0
[160] the famous move 37: https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/
[161] bringing search capabilities into LLMs: https://github.com/spcl/graph-of-thoughts
[162] generalizing self-play in games: https://www.science.org/doi/10.1126/sciadv.adg3256
[163] Yann LeCun said: https://twitter.com/ylecun/status/1728126868342145481
[164] criticism of Altman's remarks about Q* the day before he was fired in the board drama: https://youtu.be/ZFFvqRemDv8?t=805
[165] comments suggesting: https://twitter.com/futuristflower/status/1778029932490166613
[166] the latest GPT-4 Turbo release: https://twitter.com/OpenAI/status/1777772582680301665
[167] The Information reported: https://www.theinformation.com/articles/openai-made-an-ai-breakthrough-before-altman-firing-stoking-excitement-and-concern
[168] Footnote 9: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-9-143486801
[169] Magic: https://twitter.com/Justin_Halford_/status/1776864908950348268
[170] Google: https://arxiv.org/html/2404.07143v1
[171] Meta's new research: https://arxiv.org/abs/2404.08801
[172] Ask My Life: https://twitter.com/amasad/status/1777016914763817061
[173] one of the main reasons people get paid: https://twitter.com/abacaj/status/1773485186270814319
[174] growth has stalled: https://substack.com/@exponentialview/note/c-52677620
[175] reasons usage has stalled: https://archive.is/5MhEo
[176] a fun diversion: https://twitter.com/rbhar90/status/1772052483965153453
[177] not a productivity boost: https://twitter.com/fchollet/status/1772069855912747406
[178] does not always go smoothly: https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
[179] Anthropic's work.): https://www.anthropic.com/news/decomposing-language-models-into-understandable-components
[180] sampling can prove the presence of knowledge but not its absence: https://gwern.net/gpt-3-nonfiction#common-sense-knowledge
[181] jailbreaks: https://www.anthropic.com/research/many-shot-jailbreaking
[182] adversarial attacks: https://web.stanford.edu/class/cs329t/slides/llm_attacks.pdf
[183] fully reliable or fully safe against prompt injection: https://www.lesswrong.com/posts/bNCDexejSZpkuu3yz/you-can-use-gpt-4-to-create-prompt-injections-against-gpt-4
[184] the GPT-3 → GPT-4 trajectory suggests they will: https://twitter.com/emollick/status/1772327253872988513
[185] Footnote 10: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-10-143486801
[186] Footnote 11: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-11-143486801
[187] BabyAGI: https://github.com/yoheinakajima/babyagi
[188] AutoGPT: https://github.com/Significant-Gravitas/AutoGPT
[189] failed autonomy experiments: https://futurism.com/business-chatgpt-green-gadget-guru-fate
[190] Gwern has a great resource on the AI tool vs. AI agent dichotomy: https://gwern.net/tool-ai
[191] Footnote 12: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-12-143486801
[192] Footnote 13: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-13-143486801
[193] Footnote 14: https://www.thealgorithmicbridge.com/p/gpt-5-everything-you-need-to-know#footnote-14-143486801
[194] will achieve AGI by simulating everything: https://twitter.com/agihouse_org/status/1776827897892024734
[195] Voice Engine: https://openai.com/blog/navigating-the-challenges-and-opportunities-of-synthetic-voices
[196] Suno: https://suno.com/
[197] Figure 01: https://www.figure.ai/
[198] video in, trajectories out: https://twitter.com/adcock_brett/status/1743987597301399852
[199] Voyager: https://voyager.minedojo.org/
[200] AlphaGo: https://deepmind.google/technologies/alphago/
[201] AlphaZero: https://deepmind.google/discover/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/
[202] has no doubt: https://www.youtube.com/watch?t=668&v=GI4Tpi48DlA&feature=youtu.be
[203] went further in a recent talk: https://twitter.com/agihouse_org/status/1776827897892024734
[204] the precognition problem: https://en.wikipedia.org/wiki/Precognition
[205] in any case, AI companies today have very limited options, though Yann LeCun has been: https://openreview.net/pdf?id=BZ5a1r-kVsf
[206] trying: https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/
[207] ,: https://www.thealgorithmicbridge.com/p/act-1-how-adept-is-building-the-future
[208] some demos: https://www.adept.ai/blog/act-1
[209] two co-founders left the company: https://www.theinformation.com/briefings/two-co-founders-of-adept-an-openai-rival-suddenly-left-to-start-another-company
[210] The Information reported: https://www.theinformation.com/articles/to-unlock-ai-spending-microsoft-openai-and-google-prep-agents?rc=j0xnsg
[211] Devin's creators: https://twitter.com/cognition_labs/status/1767548763134964000
[212] OpenDevin: https://github.com/OpenDevin/OpenDevin
[213] debunking Devin: https://youtu.be/tNmgmwEtoWE
[214] making money by taking on tedious Upwork tasks: https://youtu.be/UTS2Hz96HYQ
[215] Rabbit R1: https://www.rabbit.tech/
[216] Humane AI Pin: https://humane.com/
[217] coming soon: https://twitter.com/jessechenglyu/status/1780656156144496924
[218] Ben Newhouse said: https://twitter.com/newhouseb/status/1750631406043320391
[219] Newhouse's quote above: https://twitter.com/newhouseb/status/1750631406043320391

