{"id":10935,"date":"2020-09-09T16:09:00","date_gmt":"2020-09-09T23:09:00","guid":{"rendered":"https:\/\/BigJimIndustries.com\/wordpress\/?p=10935"},"modified":"2020-09-14T16:15:54","modified_gmt":"2020-09-14T23:15:54","slug":"gbs-3","status":"publish","type":"post","link":"https:\/\/bigjimindustries.com\/wordpress\/2020\/09\/09\/gbs-3\/","title":{"rendered":"GBS-3"},"content":{"rendered":"\n<p><em><a href=\"http:\/\/nautil.us\/issue\/89\/the-dark-side\/welcome-to-the-next-level-of-bullshit?mc_cid=8b6f026f4e&amp;mc_eid=729deb45d5\" target=\"_blank\" rel=\"noreferrer noopener\">from NAUTILUS<\/a><\/em><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Welcome to the Next Level of Bullshit<\/h1>\n\n\n\n<p>The language algorithm GPT-3 continues our descent into a post-truth world.<\/p>\n\n\n\n<p>BY RAPHA\u00cbL MILLI\u00c8RE<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"160\" src=\"https:\/\/BigJimIndustries.com\/wordpress\/wp-content\/Screen-Shot-2020-09-14-at-16.10.04-1.jpg\" alt=\"\" class=\"wp-image-10938\" srcset=\"https:\/\/bigjimindustries.com\/wordpress\/wp-content\/uploads\/Screen-Shot-2020-09-14-at-16.10.04-1.jpg 500w, https:\/\/bigjimindustries.com\/wordpress\/wp-content\/uploads\/Screen-Shot-2020-09-14-at-16.10.04-1-300x96.jpg 300w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<p>\u201cOne of the most salient features of our culture is that there is so much bullshit.\u201d These are the opening words of the short book\u00a0<em>On Bullshit<\/em>, written by the philosopher Harry Frankfurt. Fifteen years after the publication of this surprise bestseller, the rapid progress of research on artificial intelligence is forcing us to reconsider our conception of bullshit as a hallmark of human speech, with troubling implications. What do philosophical reflections on bullshit have to do with algorithms? 
As it turns out, quite a lot.<\/p>\n\n\n\n<p>In May this year the company OpenAI, co-founded by Elon Musk in 2015, introduced a new language model called GPT-3 (for \u201cGenerative Pre-trained Transformer 3\u201d). It took the tech world by storm. On the surface, GPT-3 is like a supercharged version of the autocomplete feature on your smartphone; it can generate coherent text based on an initial input. But GPT-3\u2019s text-generating abilities go far beyond anything your phone is capable of. It can disambiguate pronouns, translate, infer, analogize, and even perform some forms of common-sense reasoning and arithmetic. It can generate fake news articles that humans can barely detect above chance. Given a definition, it can use a made-up word in a sentence. It can rewrite a paragraph in the style of a famous author. Yes, it can write creative fiction. Or generate code for a program based on a description of its function. It can even answer queries about general knowledge. The list goes on.<\/p>\n\n\n\n<p>GPT-3 is a marvel of engineering due to its breathtaking scale. It contains 175 billion parameters (the weights in the connections between the \u201cneurons\u201d or units of the network) distributed over 96 layers. It produces embeddings in a vector space with 12,288 dimensions. And it was trained on hundreds of billions of words representing a significant subset of the Internet\u2014including the entirety of English Wikipedia, countless books, and a dizzying number of web pages. Training the final model alone is estimated to have cost around $5 million. By all accounts, GPT-3 is a behemoth. Scaling up the size of its network and training data, without fundamental improvements to the years-old architecture, was sufficient to bootstrap the model into unexpectedly remarkable performance on a range of complex tasks, out of the box. 
Indeed GPT-3 is capable of \u201cfew-shot,\u201d and even, in some cases, \u201czero-shot,\u201d learning, or learning to perform a new task without being given any example of what success looks like.<\/p>\n\n\n\n<p>Interacting with GPT-3 is a surreal experience. It often\u00a0<em>feels<\/em>\u00a0like one is talking to a human with beliefs and desires. In the 2013 movie\u00a0<em>Her<\/em>, the protagonist develops a romantic relationship with a virtual assistant, and is soon disillusioned when he realizes that he was projecting human feelings and motivations onto \u201cher\u201d alien mind. GPT-3 is nowhere near as intelligent as the film\u2019s AI, but it could still find its way into our hearts. Some tech startups like\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/replika.ai\/\" target=\"_blank\">Replika<\/a>\u00a0are already working on creating AI companions molded on one\u2019s desired characteristics. There is no doubt that many people would be\u00a0<a rel=\"noreferrer noopener\" href=\"http:\/\/nautil.us\/issue\/33\/attraction\/your-next-new-best-friend-might-be-a-robot\" target=\"_blank\">prone to anthropomorphize even a simple chatbot<\/a>\u00a0built with GPT-3. One wonders what consequences this trend might have in a world where social-media interactions with actual humans have already been found to increase social isolation.<\/p>\n\n\n\n<p>[ <a href=\"http:\/\/nautil.us\/issue\/89\/the-dark-side\/welcome-to-the-next-level-of-bullshit?mc_cid=8b6f026f4e&amp;mc_eid=729deb45d5\" target=\"_blank\" rel=\"noreferrer noopener\">click to continue reading at NAUTILUS<\/a> ]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>from NAUTILUS Welcome to the Next Level of Bullshit The language algorithm GPT-3 continues our descent into a post-truth world. 
BY RAPHA\u00cbL MILLI\u00c8RE \u201cOne of the most salient features of our culture is that there is so much bullshit.\u201d These are the opening words of the short book\u00a0On Bullshit, written by the philosopher Harry Frankfurt. [&hellip;]<\/p>\n","protected":false},"author":26,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-10935","post","type-post","status-publish","format-standard","hentry","category-weirdness"],"_links":{"self":[{"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/posts\/10935","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/comments?post=10935"}],"version-history":[{"count":0,"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/posts\/10935\/revisions"}],"wp:attachment":[{"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/media?parent=10935"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/categories?post=10935"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bigjimindustries.com\/wordpress\/wp-json\/wp\/v2\/tags?post=10935"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}