{"id":49675,"date":"2024-12-03T10:01:11","date_gmt":"2024-12-03T02:01:11","guid":{"rendered":"https:\/\/fwq.ai\/blog\/49675\/"},"modified":"2024-12-03T10:01:11","modified_gmt":"2024-12-03T02:01:11","slug":"%e5%88%9b%e5%bb%ba-llm-%e4%bb%a5%e5%9c%a8-python-%e4%b8%ad%e4%bd%bf%e7%94%a8%e5%bc%a0%e9%87%8f%e6%b5%81%e8%bf%9b%e8%a1%8c%e6%b5%8b%e8%af%95","status":"publish","type":"post","link":"https:\/\/fwq.ai\/blog\/49675\/","title":{"rendered":"\u521b\u5efa LLM \u4ee5\u5728 Python \u4e2d\u4f7f\u7528\u5f20\u91cf\u6d41\u8fdb\u884c\u6d4b\u8bd5"},"content":{"rendered":"<p><b><\/b>     <\/p>\n<h1>\u521b\u5efa LLM \u4ee5\u5728 Python \u4e2d\u4f7f\u7528\u5f20\u91cf\u6d41\u8fdb\u884c\u6d4b\u8bd5<\/h1>\n<p><span style=\"font-size: 15px\">\u5b66\u4e60\u77e5\u8bc6\u8981\u5584\u4e8e\u601d\u8003\uff0c\u601d\u8003\uff0c\u518d\u601d\u8003\uff01\u4eca\u5929\u7c73\u4e91\u5c0f\u7f16\u5c31\u7ed9\u5927\u5bb6\u5e26\u6765<span style=\"color: #FF6600;, Helvetica, Arial, sans-serif;font-size: 14px;background-color: #FFFFFF\">\u300a\u521b\u5efa LLM \u4ee5\u5728 Python \u4e2d\u4f7f\u7528\u5f20\u91cf\u6d41\u8fdb\u884c\u6d4b\u8bd5\u300b<\/span>\uff0c\u4ee5\u4e0b\u5185\u5bb9\u4e3b\u8981\u5305\u542b<span style=\"color: #FF6600;, Helvetica, Arial, sans-serif;font-size: 14px;background-color: #FFFFFF\"><\/span>\u7b49\u77e5\u8bc6\u70b9\uff0c\u5982\u679c\u4f60\u6b63\u5728\u5b66\u4e60\u6216\u51c6\u5907\u5b66\u4e60<span style=\"color: #FF6600;, Helvetica, Arial, sans-serif;font-size: 14px;background-color: #FFFFFF\">\u6587\u7ae0<\/span>\uff0c\u5c31\u90fd\u4e0d\u8981\u9519\u8fc7\u672c\u6587\u5566~\u8ba9\u6211\u4eec\u4e00\u8d77\u6765\u770b\u770b\u5427\uff0c\u80fd\u5e2e\u52a9\u5230\u4f60\u5c31\u66f4\u597d\u4e86\uff01<\/span><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.17golang.com\/uploads\/20241031\/173037439967236aff76a8c.jpg\" class=\"aligncenter\" title=\"\u521b\u5efa LLM \u4ee5\u5728 Python \u4e2d\u4f7f\u7528\u5f20\u91cf\u6d41\u8fdb\u884c\u6d4b\u8bd5\u63d2\u56fe\" alt=\"\u521b\u5efa LLM \u4ee5\u5728 Python \u4e2d\u4f7f\u7528\u5f20\u91cf\u6d41\u8fdb\u884c\u6d4b\u8bd5\u63d2\u56fe\" \/><\/p>\n<p>\u55e8\uff0c<\/p>\n<p>\u6211\u60f3\u6d4b\u8bd5\u4e00\u4e2a\u5c0f\u578b\u7684llm\u7a0b\u5e8f\uff0c\u6211\u51b3\u5b9a\u7528tensorflow\u6765\u505a\u3002<\/p>\n<p>\u6211\u7684\u6e90\u4ee3\u7801\u53ef\u4ee5\u5728 https:\/\/github.com\/victordalet\/first_llm<\/p>\n<hr>\n<p>\u60a8\u9700\u8981\u5b89\u88c5tensorflow\u548cnumpy<\/p>\n<pre>\n\npip install 'numpy&lt;2'\npip install tensorflow\n\n\n<\/pre>\n<hr>\n<p>\u60a8\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u6570\u636e\u5b57\u7b26\u4e32\u6570\u7ec4\u6765\u8ba1\u7b97\u4e00\u4e2a\u5c0f\u6570\u636e\u96c6\uff0c\u4f8b\u5982\u6211\u521b\u5efa\uff1a<\/p>\n<pre>\n\ndata = [\n    \"salut comment ca va\",\n    \"je suis en train de coder\",\n    \"le machine learning est une branche de l'intelligence artificielle\",\n    \"le deep learning est une branche du machine learning\",\n]\n\n\n<\/pre>\n<p>\u5982\u679c\u4f60\u6ca1\u6709\u7075\u611f\uff0c\u53ef\u4ee5\u5728kaggle\u4e0a\u627e\u5230\u4e00\u4e2a\u6570\u636e\u96c6\u3002<\/p>\n<hr>\n<p>\u4e3a\u6b64\uff0c\u6211\u4f7f\u7528\u5404\u79cd\u65b9\u6cd5\u521b\u5efa\u4e86\u4e00\u4e2a\u5c0f\u578b llm \u7c7b\u3002<\/p>\n<pre>\n\nclass llm:\n\n    def __init__(self):\n        self.model = none\n        self.max_sequence_length = none\n        self.input_sequences = none\n        self.total_words = none\n        self.tokenizer = none\n        self.tokenize()\n        self.create_input_sequences()\n        self.create_model()\n        self.train()\n        
<p>To do this, I created a small LLM class with the methods below.</p>
<pre>
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense


class LLM:

    def __init__(self):
        self.model = None
        self.max_sequence_length = None
        self.input_sequences = None
        self.total_words = None
        self.tokenizer = None
        self.tokenize()
        self.create_input_sequences()
        self.create_model()
        self.train()
        test_sentence = "pour moi le machine learning est"
        print(self.test(test_sentence, 10))

    def tokenize(self):
        # Fit the tokenizer on the dataset and record the vocabulary size.
        self.tokenizer = Tokenizer()
        self.tokenizer.fit_on_texts(data)
        self.total_words = len(self.tokenizer.word_index) + 1

    def create_input_sequences(self):
        # Build growing n-gram prefixes of every sentence and pad them to the
        # same length; the last token of each prefix is the label to predict.
        self.input_sequences = []
        for line in data:
            token_list = self.tokenizer.texts_to_sequences([line])[0]
            for i in range(1, len(token_list)):
                n_gram_sequence = token_list[:i + 1]
                self.input_sequences.append(n_gram_sequence)

        self.max_sequence_length = max([len(x) for x in self.input_sequences])
        self.input_sequences = pad_sequences(self.input_sequences, maxlen=self.max_sequence_length, padding='pre')

    def create_model(self):
        # Embedding -> stacked LSTMs -> softmax over the vocabulary.
        self.model = Sequential()
        self.model.add(Embedding(self.total_words, 100, input_length=self.max_sequence_length - 1))
        self.model.add(LSTM(150, return_sequences=True))
        self.model.add(Dropout(0.2))
        self.model.add(LSTM(100))
        self.model.add(Dense(self.total_words, activation='softmax'))

    def train(self):
        self.model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

        # Inputs are every token but the last; the label is the last token.
        x, y = self.input_sequences[:, :-1], self.input_sequences[:, -1]
        y = tf.keras.utils.to_categorical(y, num_classes=self.total_words)

        self.model.fit(x, y, epochs=200, verbose=1)
</pre>
<p>Finally, I test the model with the test method that is called from the class constructor.</p>
<p>Warning: in this test method, I stop generation as soon as the generated word is the same as the previous one.</p>
<pre>
    def test(self, sentence: str, nb_word_to_generate: int):
        last_word = ""
        for _ in range(nb_word_to_generate):
            # Tokenize and pad the current sentence, then predict the next word.
            token_list = self.tokenizer.texts_to_sequences([sentence])[0]
            token_list = pad_sequences([token_list], maxlen=self.max_sequence_length - 1, padding='pre')
            predicted = np.argmax(self.model.predict(token_list), axis=-1)
            output_word = ""
            for word, index in self.tokenizer.word_index.items():
                if index == predicted:
                    output_word = word
                    break

            # Stop if the model starts repeating itself.
            if last_word == output_word:
                return sentence

            sentence += " " + output_word
            last_word = output_word

        return sentence
</pre>
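<p>To round things off, here is a hedged usage sketch; the __main__ guard and the second prompt are my own additions for illustration, not part of the original code.</p>
<pre>
# Hypothetical usage: the constructor builds the vocabulary, trains for
# 200 epochs on the data list, and already prints one completed sentence.
if __name__ == "__main__":
    model = LLM()
    # Try another prompt; 5 is the maximum number of words to generate.
    print(model.test("le deep learning est", 5))
</pre>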