{"id":3341,"date":"2024-11-23T17:00:05","date_gmt":"2024-11-23T09:00:05","guid":{"rendered":"https:\/\/changjinyuan.com\/?p=3341"},"modified":"2024-12-28T22:40:23","modified_gmt":"2024-12-28T14:40:23","slug":"%e5%88%98%e5%8f%b2%e6%af%93-wei-l-lv-s-li-m-2023-stability-and-generalization-of-l-p-regularized-stochastic-learning-for-graph-convolutional-networks-international-joint-conferences-on","status":"publish","type":"post","link":"https:\/\/changjinyuan.com\/index.php\/publications\/publications-all\/3341\/","title":{"rendered":"\u5218\u53f2\u6bd3, Wei, L., Lv, S., &#038; Li, M. (2023). Stability and generalization of l p-regularized stochastic learning for graph convolutional networks. International Joint Conferences on Artificial Intelligence (IJCAI)."},"content":{"rendered":"<p>Graph convolutional networks (GCN) are viewed asone of the most popular representations among thevariants of graph neural networks over graph dataand have shown powerful performance in empiricalexperiments. That l2-based graph smoothing enforces the global smoothness of GCN, while (soft)l1-based sparse graph learning tends to promotesignal sparsity to trade for discontinuity. This paper aims to quantify the trade-off of GCN betweensmoothness and sparsity, with the help of a generall, \u2113p-regularized (1 &lt; p\u2264 2) stochastic learning pro.posed within. While stability-based generalizationanalyses have been given in prior work for a secondderivative objectiveness function, our \u2113p-regularized learning scheme does not satisfy such a smooth con.dition. To tackle this issue, we propose a novel SGD proximal algorithm for GCNs with an inexactoperator. For a single-layer GCN, we establish anexplicit theoretical understanding of GCN with the \u2113p-regularized stochastic learning by analyzing thestability of our SGD proximal algorithm. We conduct multiple empirical experiments to validate ourtheoretical findings.<\/p>\r\n\r\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" style=\"width: 100%; height: 600px;\" data=\"https:\/\/changjinyuan.com\/wp-content\/uploads\/2024\/11\/Stability-and-Generalization-of-\u2113p-Regularized-Stochastic-Learning-for-GCN.pdf\" type=\"application\/pdf\" width=\"300\" height=\"150\" aria-label=\"\u5d4c\u5165 Stability and Generalization of \u2113p-Regularized Stochastic Learning for GCN\"><\/object><a id=\"wp-block-file--media-9b9b65c7-c3ec-4242-a510-85efba383021\" href=\"https:\/\/changjinyuan.com\/wp-content\/uploads\/2024\/11\/Stability-and-Generalization-of-\u2113p-Regularized-Stochastic-Learning-for-GCN.pdf\">Stability and Generalization of \u2113p-Regularized Stochastic Learning for GCN<\/a><a class=\"wp-block-file__button wp-element-button\" href=\"https:\/\/changjinyuan.com\/wp-content\/uploads\/2024\/11\/Stability-and-Generalization-of-\u2113p-Regularized-Stochastic-Learning-for-GCN.pdf\" download=\"\" aria-describedby=\"wp-block-file--media-9b9b65c7-c3ec-4242-a510-85efba383021\">\u4e0b\u8f7d<\/a><\/div>\r\n","protected":false},"excerpt":{"rendered":"<p>Graph convolutional networks (GCN) are viewed asone of the most popular representations among thevariants of graph neural networks over graph dataand have shown powerful performance in empiricalexperiments. That l2-based graph smoothing enforces the global smoothness of GCN, while (soft)l1-based sparse graph learning tends to promotesignal sparsity to trade for discontinuity. 