<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
<title><![CDATA[linuxの飘扬]]></title> 
<link>https://www.linuxfly.org/index.php</link> 
<description><![CDATA[Powered by www.linuxfly.org]]></description> 
<language>zh-cn</language> 
<copyright><![CDATA[linuxの飘扬]]></copyright>
<item>
<link>https://www.linuxfly.org/post/733/</link>
<title><![CDATA[Modifying the k3s Certificate Validity Period]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[Kubernetes]]></category>
<pubDate>Wed, 17 Jan 2024 10:38:16 +0000</pubDate> 
<guid>https://www.linuxfly.org/post/733/</guid> 
<description>
<![CDATA[ 
	It has been a long time since my last update; I ran into a problem, so here is a quick note.<br/><br/>When k3s is set up, the default CA certificate is valid for 10 years and client certificates for 1 year. Before a client certificate expires, it can be renewed automatically by restarting the k3s service (without affecting running Pods). But an expiring CA certificate is far more troublesome: the official <a href="https://docs.k3s.io/zh/cli/certificate#%E8%BD%AE%E6%8D%A2%E8%87%AA%E5%AE%9A%E4%B9%89-ca-%E8%AF%81%E4%B9%A6" target="_blank">certificate rotation</a> procedure is quite complex.<br/><br/>That is why guides appeared online for creating a custom CA certificate valid for 100 years, and for changing the system time to generate longer-lived client certificates.<br/><br/>References:<br/><a href="https://www.cnblogs.com/KSPT/p/16688400.html" target="_blank">Generating a 100-year CA certificate for K3s</a><br/><a href="https://www.cnblogs.com/KSPT/p/16688336.html" target="_blank">Generating 100-year non-CA certificates for K3s</a><br/><br/>That approach requires no changes to the k3s source code and can be scripted, but adjusting the system time may cause other problems, and the steps are fairly involved.<br/><br/>Since the certificates are generated by k3s, the handling logic must live somewhere in the k3s code. Analysis shows that k3s actually uses the <a href="https://github.com/rancher/dynamiclistener" target="_blank">dynamiclistener</a> library to generate its certificates.<br/><br/>According to the dynamiclistener documentation, it already provides a CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS variable to control the validity period of generated client certificates, but the k3s install.sh script does not pass it through (when the k3s service starts for the first time).<br/><br/>Following the same approach, I made a small change to dynamiclistener, adding an environment variable CATTLE_NEW_SIGNED_CA_EXPIRATION_YEARS that defines the CA certificate validity period in years, defaulting to 100.<br/>See the commit <a href="https://github.com/qkboy/dynamiclistener/commit/587be474c1897c64fd28339aa91ca19f978b33bc" target="_blank">here</a>; the corresponding tag is v0.3.6-ske.3.<br/><br/>Then modify go.mod in the k3s project to point at the modified dynamiclistener:<br/><div class="code"><br/>replace github.com/rancher/dynamiclistener =&gt; github.com/qkboy/dynamiclistener v0.3.6-ske.3<br/>require github.com/rancher/dynamiclistener v0.0.0-00010101000000-000000000000<br/></div><br/><br/>After saving, run go mod tidy by hand to generate a new go.sum, then rebuild k3s:<br/><div class="code"># SKIP_VALIDATE=true make</div><br/><br/>Building k3s from source is not complicated: as long as docker is installed and the machine has unrestricted Internet access, running the command above is all it takes.<br/><br/>The last step is to tweak install.sh so that the two new environment variables are read when k3s starts for the first time:<br/><div class="code"># --- capture current env and create file containing k3s_ variables ---<br/>create_env_file() &#123;<br/>&nbsp;&nbsp;&nbsp;&nbsp;info &quot;env: Creating environment file $&#123;FILE_K3S_ENV&#125;&quot;<br/>&nbsp;&nbsp;&nbsp;&nbsp;$SUDO touch 
$&#123;FILE_K3S_ENV&#125;<br/>&nbsp;&nbsp;&nbsp;&nbsp;$SUDO chmod 0600 $&#123;FILE_K3S_ENV&#125;<br/>&nbsp;&nbsp;&nbsp;&nbsp;sh -c export &#124; while read x v; do echo $v; done &#124; grep -E &#039;^(K3S&#124;CONTAINERD)_&#039; &#124; $SUDO tee $&#123;FILE_K3S_ENV&#125; &gt;/dev/null<br/>&nbsp;&nbsp;&nbsp;&nbsp;sh -c export &#124; while read x v; do echo $v; done &#124; grep -Ei &#039;^(NO&#124;HTTP&#124;HTTPS)_PROXY&#039; &#124; $SUDO tee -a $&#123;FILE_K3S_ENV&#125; &gt;/dev/null<br/>&nbsp;&nbsp;&nbsp;&nbsp;sh -c export &#124; while read x v; do echo $v; done &#124; grep -E &#039;^(CATTLE_NEW_SIGNED_CA_EXPIRATION_YEARS&#124;CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS)&#039; &#124; $SUDO tee -a $&#123;FILE_K3S_ENV&#125; &gt;/dev/null<br/>&#125;</div><br/><br/>Install k3s with the modified install.sh:<br/><div class="code"># INSTALL_K3S_SKIP_DOWNLOAD=true CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS=3650 CATTLE_NEW_SIGNED_CA_EXPIRATION_YEARS=50 ./install.sh</div><br/><br/>Result:<br/><div class="code">root@env2-node01:~# for i in `ls /var/lib/rancher/k3s/server/tls/*.crt`; do echo $i; openssl x509 -enddate -noout -in $i; done<br/>/var/lib/rancher/k3s/server/tls/client-admin.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT &lt;-- client: 10 years<br/>/var/lib/rancher/k3s/server/tls/client-ca.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT &lt;-- root CA: 50 years<br/>/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT<br/>/var/lib/rancher/k3s/server/tls/client-controller.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/client-k3s-cloud-controller.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/client-k3s-controller.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt<br/>notAfter=Jan 13 07:46:06 2034 
GMT<br/>/var/lib/rancher/k3s/server/tls/client-kube-proxy.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/client-scheduler.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/client-supervisor.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/request-header-ca.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT<br/>/var/lib/rancher/k3s/server/tls/server-ca.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT<br/>/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT<br/>/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>root@env2-node01:~# for i in `ls /var/lib/rancher/k3s/server/tls/etcd/*.crt`; do echo $i; openssl x509 -enddate -noout -in $i; done<br/>/var/lib/rancher/k3s/server/tls/etcd/client.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT<br/>/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT<br/>/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt<br/>notAfter=Jan&nbsp;&nbsp;3 07:46:06 2074 GMT<br/>/var/lib/rancher/k3s/server/tls/etcd/server-client.crt<br/>notAfter=Jan 13 07:46:06 2034 GMT</div><br/><br/>Finally, if the changes above feel too involved, you can use the two k3s versions I have already modified:<br/><a href="https://github.com/qkboy/k3s/tree/v1.24.17%2Bk3s1-longcert" target="_blank">v1.24.17+k3s1</a><br/><a href="https://github.com/qkboy/k3s/tree/v1.29.0%2Bk3s1-longcert" target="_blank">v1.29.0+k3s1</a><br/><br/>Download the corresponding branch and build it directly.<br/>The change has been submitted upstream as a PR (<a href="https://github.com/rancher/dynamiclistener/pull/91" target="_blank">#91</a>) and is awaiting a response.<br/>When I have time, I will also upload the compiled binaries.<br/>Tags - <a href="https://www.linuxfly.org/tags/k3s/" rel="tag">k3s</a> , <a href="https://www.linuxfly.org/tags/certificate/" rel="tag">certificate</a>
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/kubernetes-19-conflict-with-centos7/</link>
<title><![CDATA[[Original] Kubernetes 1.9 Kernel Compatibility Issue with CentOS 7.3]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[Kubernetes]]></category>
<pubDate>Fri, 30 Mar 2018 12:39:02 +0000</pubDate> 
<guid>https://www.linuxfly.org/kubernetes-19-conflict-with-centos7/</guid> 
<description>
<![CDATA[ 
	In production, Java applications were intermittently crashing with coredumps, and in the test environment writes to /cgroup/memory intermittently failed with "no space left on device", leaving the whole kubernetes node unusable. As the leaked cgroups piled up, docker ps started misbehaving, until memory was exhausted and the machine hung.<br/>Typical errors:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">kubelet.ns-k8s-node001.root.log.ERROR.20180214-113740.15702:1593018:E0320 04:59:09.572336 15702 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sa<br/>ndbox container for pod "osp-xxx-com-ljqm19-54bf7678b8-bvz9s": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:258: applying cgroup configuration<br/>for process caused &#92;"mkdir /sys/fs/cgroup/memory/kubepods/burstable/podf1bd9e87-1ef2-11e8-afd3-fa163ecf2dce/8710c146b3c8b52f5da62e222273703b1e3d54a6a6270a0ea7ce1b194f1b5053: <span style="color: #FF0000;">no space left on device</span>&#92;""</div></div><br/>Or:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">Mar 26 18:36:59 ns-k8s-node-s0054 kernel: SLUB: Unable to allocate memory on node -1 (gfp=0x8020)<br/>Mar 26 18:36:59 ns-k8s-noah-node001 kernel: cache: ip6_dst_cache(1995:6b6bc0c9f30123084a409d89a300b017d26ee5e2c3ac8a02c295c378f3dbfa5f), object size: 448, buffer size: 448, default order: 2, min order: 0</div></div><br/>Around the time the problem appeared, kubernetes had been upgraded from 1.6 to 1.9, so the issue was suspected to involve kubernetes and the kernel.<br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/kubernetes/" rel="tag">kubernetes</a> , <a href="https://www.linuxfly.org/tags/docker/" rel="tag">docker</a>
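The cgroup accumulation can be watched as it happens. A diagnostic one-liner (my own sketch, not from the original post):

```shell
# /proc/cgroups lists, per controller: name, hierarchy id, num_cgroups,
# enabled. Watching the memory row climb toward the 65535 per-hierarchy
# cap shows the leak that ends in "no space left on device".
awk '$1 == "memory" {print "memory cgroups:", $3}' /proc/cgroups
```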
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/bash-modify-pipesize-invalid-argument/</link>
<title><![CDATA[[Original] Error Changing the ulimit pipe size in bash]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[Basics]]></category>
<pubDate>Fri, 16 Mar 2018 02:57:20 +0000</pubDate> 
<guid>https://www.linuxfly.org/bash-modify-pipesize-invalid-argument/</guid> 
<description>
<![CDATA[ 
	Capturing the output of a command with python subprocess stopped responding; suspecting the pipe size was too small, I tried to raise it.<br/>But it failed:<br/><div class="code"># ulimit -p 16<br/>-bash: ulimit: pipe size: cannot modify limit: Invalid argument</div><br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/bash/" rel="tag">bash</a>
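On Linux, `ulimit -p` only reports the pipe size (in 512-byte blocks); there is no rlimit behind it, which is why setting it fails with "Invalid argument". Individual pipe buffers can instead be resized with the `F_SETPIPE_SZ` fcntl (kernel ≥ 2.6.35). A minimal Python sketch:

```python
import fcntl
import os

# Linux-specific fcntl commands; the named constants exist from
# Python 3.10, so fall back to the raw values on older interpreters.
F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

r, w = os.pipe()
default_size = fcntl.fcntl(w, F_GETPIPE_SZ)   # usually 65536 (64 KiB)
fcntl.fcntl(w, F_SETPIPE_SZ, 1 << 20)         # request a 1 MiB buffer
new_size = fcntl.fcntl(w, F_GETPIPE_SZ)
print(default_size, new_size)
os.close(r)
os.close(w)
```

Unprivileged processes can grow a pipe up to /proc/sys/fs/pipe-max-size (1 MiB by default); for a subprocess pipe you would apply the fcntl to `proc.stdout.fileno()` after spawning.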
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/restore-file-from-ext4-filesystem-disk/</link>
<title><![CDATA[Recovering Accidentally Deleted Files from an ext4 Disk]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[Troubleshooting]]></category>
<pubDate>Fri, 13 Nov 2015 02:06:33 +0000</pubDate> 
<guid>https://www.linuxfly.org/restore-file-from-ext4-filesystem-disk/</guid> 
<description>
<![CDATA[ 
	<strong>1. Problem</strong><br/>Some files on a disk were deleted by mistake and need to be recovered.<br/><strong><span style="color: #FF0000;">※ Note:<br/>Before recovering, do not perform any writes to the partition that needs recovery.<br/>If the partition is on a separate disk, detach that disk and repair it while unmounted.<br/>If the partition is on the same disk as the root partition, remount it read-only:</span></strong><br/><br/><div class="code">mount -o remount,ro /dev/sdX1</div><br/><strong><span style="color: #FF0000;">If the partition to repair is the root partition itself, the only option is to shut the machine down and attach the disk to another machine for the repair.</span></strong><br/>◎ How recovery works:<br/>Walk the filesystem journal to locate the corresponding inodes, then reassemble them into normal files.<br/><strong><span style="color: #FF0000;">Consequently, a disk reformatted with mkfs.ext4 or the like, where the superblocks have all been rewritten, cannot be recovered.</span></strong><br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/ext4/" rel="tag">ext4</a> , <a href="https://www.linuxfly.org/tags/restore/" rel="tag">restore</a>
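One common tool that implements exactly this journal walk is extundelete. A sketch of the procedure, assuming the affected partition is /dev/sdb1 and the file path is purely illustrative:

```shell
# Remount the affected partition read-only first (or unmount it entirely).
mount -o remount,ro /dev/sdb1

# extundelete scans the ext3/ext4 journal for old copies of the inode
# tables and restores the referenced data blocks.
# Restore one file (path is relative to the filesystem root, no leading /):
extundelete /dev/sdb1 --restore-file home/user/report.txt

# Or restore everything it can find (written to ./RECOVERED_FILES/):
extundelete /dev/sdb1 --restore-all
```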
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/windows_install_mysql_python_library/</link>
<title><![CDATA[[Original] Installing MySQL-python 1.2.5 on Windows]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[Python]]></category>
<pubDate>Sun, 07 Jun 2015 11:12:28 +0000</pubDate> 
<guid>https://www.linuxfly.org/windows_install_mysql_python_library/</guid> 
<description>
<![CDATA[ 
	Debugging Python on Windows is quite a hassle: just installing the MySQL-python library through PyCharm took me the better part of a day. I tried 1.2.3, 1.2.4, and 1.2.5 in turn, each failing with a different error. +_+<br/>In the end I settled on making it work with 1.2.5; the problem to solve was this one:<br/><a href="http://stackoverflow.com/questions/1972259/cannot-open-include-file-config-win-h-no-such-file-or-directory-while-inst" target="_blank">“Cannot open include file: 'config-win.h': No such file or directory” while installing mysql-python</a><br/><br/>That report is against 1.2.4, and it was supposedly fixed in 1.2.5. In practice, though, 1.2.5 still has problems in a 64-bit Windows environment, for reasons explained below.<br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/mysql/" rel="tag">mysql</a> , <a href="https://www.linuxfly.org/tags/python/" rel="tag">python</a>
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/nova_migrate_specify_destination_host/</link>
<title><![CDATA[[Original] Specifying the Destination Host When Running nova migrate]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[OpenStack]]></category>
<pubDate>Fri, 05 Jun 2015 10:21:50 +0000</pubDate> 
<guid>https://www.linuxfly.org/nova_migrate_specify_destination_host/</guid> 
<description>
<![CDATA[ 
	By patching the nova source code, both nova client and nova server gain support for specifying the destination host in an offline migrate.<br/>(Applies only to the RDO icehouse openstack-nova-2014.1.3-3 update.)<br/>※ Note: the patch provided before 2015-06-10 had a bug: after applying it, resize failed with “NoValidHost: No valid host was found.” The cause was the parameter order of the resize() method in compute/api.py; the patch below has been corrected.<br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/nova/" rel="tag">nova</a> , <a href="https://www.linuxfly.org/tags/openstack/" rel="tag">openstack</a>
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/post/726/</link>
<title><![CDATA[[Original] Fixing the OpenvSwitch "terminating with signal 14 (Alarm clock)" Error]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[OpenStack]]></category>
<pubDate>Thu, 07 May 2015 06:33:01 +0000</pubDate> 
<guid>https://www.linuxfly.org/post/726/</guid> 
<description>
<![CDATA[ 
	After bridging br-ex to the eth0 NIC and restarting the neutron-openvswitch-agent service, errors kept appearing and the patch-int and patch-tun ports could not be created (it succeeded only rarely).<br/>As a result, openvswitch kept restarting, and the external network (the connection to RabbitMQ) kept dropping and reconnecting as well.<br/><br/>Logs:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">2015-05-06 23:40:43.299 18254 ERROR neutron.agent.linux.ovs_lib [req-9ec5d95e-3626-4494-b043-35d5211747d8 None] Unable to execute ['ovs-vsctl', '--timeout=10', 'add-port', 'br-int', 'patch-tun', '--', 'set', 'Interface', 'patch-tun', 'type=patch', 'options:peer=patch-int']. Exception:<br/>Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'add-port', 'br-int', 'patch-tun', '--', 'set', 'Interface', 'patch-tun', 'type=patch', 'options:peer=patch-int']<br/>Exit code: 242<br/>Stdout: ''<br/>Stderr: '2015-05-06T15:40:43Z&#124;00002&#124;fatal_signal&#124;WARN&#124;terminating with signal 14 (Alarm clock)&#92;n'<br/>2015-05-06 23:40:45.936 18254 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 192.168.209.137:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 9 seconds.<br/>2015-05-06 23:40:53.530 18254 ERROR neutron.agent.linux.utils [req-9ec5d95e-3626-4494-b043-35d5211747d8 None]<br/>Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'add-port', 'br-tun', 'patch-int', '--', 'set', 'Interface', 'patch-int', 'type=patch', 'options:peer=patch-tun']<br/>Exit code: 242<br/>Stdout: ''<br/><span style="color: #FF0000;">Stderr: '2015-05-06T15:40:53Z&#124;00002&#124;fatal_signal&#124;WARN&#124;terminating with signal 14 (Alarm clock)&#92;n'</span>2015-05-06 23:40:53.530 18254 ERROR neutron.agent.linux.ovs_lib [req-9ec5d95e-3626-4494-b043-35d5211747d8 None] Unable to execute ['ovs-vsctl', '--timeout=10', 'add-port', 'br-tun', 'patch-int', '--', 'set', 'Interface', 'patch-int', 'type=patch', 'options:peer=patch-tun']. 
Exception:<br/>Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'add-port', 'br-tun', 'patch-int', '--', 'set', 'Interface', 'patch-int', 'type=patch', 'options:peer=patch-tun']<br/>Exit code: 242<br/>Stdout: ''<br/>Stderr: '2015-05-06T15:40:53Z&#124;00002&#124;fatal_signal&#124;WARN&#124;terminating with signal 14 (Alarm clock)&#92;n'<br/><span style="color: #FF0000;">2015-05-06 23:40:53.638 18254 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-9ec5d95e-3626-4494-b043-35d5211747d8 None] Failed to create OVS patch port. Cannot have tunneling enabled on this agent, since this version of OVS does not support tunnels or patch ports. Agent terminated!</span>2015-05-06 23:48:13.179 18701 INFO neutron.common.config [-] Logging enabled!<br/>2015-05-06 23:48:13.728 18701 ERROR neutron.agent.linux.utils [-]</div></div><br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/openstack/" rel="tag">openstack</a> , <a href="https://www.linuxfly.org/tags/neutron/" rel="tag">neutron</a> , <a href="https://www.linuxfly.org/tags/openvswitch/" rel="tag">openvswitch</a>
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/post/725/</link>
<title><![CDATA[[Original] Fixing corosync 2.3.3 Failing to Form a Two-Node Cluster on CentOS 7]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[HA]]></category>
<pubDate>Tue, 05 May 2015 03:21:54 +0000</pubDate> 
<guid>https://www.linuxfly.org/post/725/</guid> 
<description>
<![CDATA[ 
	corosync is used to form a Pacemaker cluster, but after starting the corosync service, the pacemaker service was not started automatically.<br/>It turns out that with corosync 2.3.3 on CentOS 7, pacemaker is disabled by default and must be enabled explicitly.<br/><br/>After starting the corosync service, the two nodes failed to form a cluster; there were no Nodes:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">[root@gz-controller-209100 ~]# crm status&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br/>Last updated: Mon May&nbsp;&nbsp;4 14:43:13 2015<br/>Last change: Mon May&nbsp;&nbsp;4 14:26:45 2015<br/>Current DC: NONE<br/>0 Nodes configured<br/>0 Resources configured</div></div><br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/corosync/" rel="tag">corosync</a>
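For reference, on CentOS 7 the two pieces of the fix look roughly like this. A sketch under the stock corosync 2.x packaging; one common cause of the empty node list in a two-node setup is missing votequorum configuration:

```shell
# In corosync 2.x, pacemaker is a separate systemd unit and is no longer
# launched by corosync itself; enable both services explicitly.
systemctl enable --now corosync pacemaker

# /etc/corosync/corosync.conf: a two-node cluster needs votequorum with
# two_node set, otherwise quorum is never reached:
#   quorum {
#       provider: corosync_votequorum
#       two_node: 1
#   }
```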
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/post/724/</link>
<title><![CDATA[[Original] mongodb Deployment Failure with RDO juno dev1462]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[OpenStack]]></category>
<pubDate>Thu, 16 Apr 2015 03:09:25 +0000</pubDate> 
<guid>https://www.linuxfly.org/post/724/</guid> 
<description>
<![CDATA[ 
	When deploying with the RDO juno openstack-packstack-dev1462 release, the mongodb deployment step failed with the following error:<br/><br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">[root@controller01 ~]# packstack --answer-file=./packstack-answers-20150415-110139.txt&nbsp;&nbsp; <br/>192.168.209.137_mongodb.pp:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [ ERROR ]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br/>Applying Puppet manifests&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [ ERROR ]<br/><br/>ERROR : Error appeared during Puppet run: 192.168.209.137_mongodb.pp<br/>Error: Unable to connect to mongodb server! (192.168.209.137:27017)<br/>You will find full trace in log /var/tmp/packstack/20150415-161743-hbPMV4/manifests/192.168.209.137_mongodb.pp.log</div></div><br/>The actual error is that the mongo service could not be reached:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">[root@controller01 ~]# cat /var/tmp/packstack/20150415-161743-hbPMV4/manifests/192.168.209.137_mongodb.pp.log<br/>......<br/>Notice: Failed to connect to mongodb within timeout window of 240 seconds; giving up.<br/>Error: Unable to connect to mongodb server! (192.168.209.137:27017)<br/>Error: /Stage[main]/Mongodb::Server::Service/Mongodb_conn_validator[mongodb]/ensure: change from absent to present <span style="color: #FF0000;">failed: Unable to connect to mongodb server! 
(192.168.209.137:27017)</span><br/>Notice: /Stage[main]/Mongodb::Server/Anchor[mongodb::server::end]: Dependency Mongodb_conn_validator[mongodb] has failures: true<br/>Warning: /Stage[main]/Mongodb::Server/Anchor[mongodb::server::end]: Skipping because of failed dependencies</div></div><br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/openstack/" rel="tag">openstack</a> , <a href="https://www.linuxfly.org/tags/rdo/" rel="tag">rdo</a> , <a href="https://www.linuxfly.org/tags/packstack/" rel="tag">packstack</a>
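A frequent cause of this class of packstack failure (an assumption on my part, since the excerpt is truncated) is mongod listening only on loopback while the Puppet connection validator probes the management IP. A sketch of checking and fixing that:

```shell
# Confirm which addresses mongod is actually listening on.
ss -ltn | grep 27017

# /etc/mongodb.conf: make mongod listen on the address packstack probes
# (192.168.209.137 in the log above), then restart the service:
#   bind_ip = 127.0.0.1,192.168.209.137
systemctl restart mongod
```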
]]>
</description>
</item><item>
<link>https://www.linuxfly.org/post/723/</link>
<title><![CDATA[[Original] "Virtual Interface creation failed" Error When Creating an Instance]]></title> 
<author>linuxing &lt;emos#linuxfly.org&gt;</author>
<category><![CDATA[OpenStack]]></category>
<pubDate>Wed, 04 Feb 2015 08:42:56 +0000</pubDate> 
<guid>https://www.linuxfly.org/post/723/</guid> 
<description>
<![CDATA[ 
	Fixing an error that occurs when creating instances in an environment with many nodes and VM instances:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">Virtual Interface creation failed</div></div><br/>with the corresponding error from the Neutron OpenvSwitch Agent:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">Timeout while waiting on RPC response</div></div><br/><br/>According to the available material, in releases before Juno, RPC traffic grows exponentially as nodes are added. On top of that, applying security_group rules through iptables takes considerable time.<br/><br/>Every time a Port is created for a new instance, some security group in the system gains or loses a member, so every node has to be notified: the membership of certain security groups has changed, and any node referencing those groups must update its iptables rules accordingly. For linux bridge and ovs, these update requests are handled by the neutron l2 agent.<br/><br/>Together, these two factors mean that with many host nodes and VMs, each security_group call takes a long time to return, and the port-creation RPC times out:<br/><div class="quote"><div class="quote-title">Quote</div><div class="quote-content">Timeout: Timeout while waiting on RPC response - topic: "q-plugin", RPC method: "security_group_rules_for_devices" info: "<unknown>"<br/></div></div><br/>In the end, Nova times out waiting for Neutron to create the Port and reports the Virtual Interface creation failed error.<br/>............<br/><br/>Tags - <a href="https://www.linuxfly.org/tags/openstack/" rel="tag">openstack</a>
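By way of illustration, the typical Juno-era configuration mitigations tune the two timeouts involved. The values below are examples I chose, not a prescription from the post:

```shell
# nova.conf on compute nodes: wait longer for Neutron's vif-plugged
# event, or make the timeout non-fatal while the root cause is fixed.
#   [DEFAULT]
#   vif_plugging_timeout = 600
#   vif_plugging_is_fatal = False

# neutron.conf / agent config: give security_group_rules_for_devices
# more headroom than the default 60-second RPC timeout.
#   [DEFAULT]
#   rpc_response_timeout = 300
```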
]]>
</description>
</item>
</channel>
</rss>